Test Report: Docker_Linux_crio 22427

f815509b9ccb41a33be05aa7241c338e7909bf25:2026-01-10:43184

Test fail (26/332)
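Note: all 26 failures below share a single signature. Each "out/minikube-linux-amd64 -p addons-910183 addons disable <addon>" call exits with status 11 (MK_ADDON_DISABLE_PAUSED) because the pre-disable paused-check runs "sudo runc list -f json" on the node, and on this crio node that command fails with "open /run/runc: no such file or directory". A minimal sketch of that check, assuming a cluster started the way this run was (docker driver, crio runtime, profile name taken from the logs; a reproduction aid, not a definitive diagnosis):

    # Sketch: reproduce the paused-check that every "addons disable" call below trips over.
    # Assumes the same setup as this run (profile, driver, runtime are from the logs).
    minikube start -p addons-910183 --driver=docker --container-runtime=crio
    minikube -p addons-910183 ssh -- sudo runc list -f json
    # Expected failure, matching the stderr captured below:
    #   time="..." level=error msg="open /run/runc: no such file or directory"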

TestAddons/serial/Volcano (0.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910183 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910183 addons disable volcano --alsologtostderr -v=1: exit status 11 (240.346042ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I0110 08:21:34.678490   16925 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:21:34.678810   16925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:21:34.678819   16925 out.go:374] Setting ErrFile to fd 2...
	I0110 08:21:34.678823   16925 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:21:34.678996   16925 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:21:34.679258   16925 mustload.go:66] Loading cluster: addons-910183
	I0110 08:21:34.679571   16925 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:21:34.679585   16925 addons.go:622] checking whether the cluster is paused
	I0110 08:21:34.679670   16925 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:21:34.679682   16925 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:21:34.680067   16925 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:21:34.698059   16925 ssh_runner.go:195] Run: systemctl --version
	I0110 08:21:34.698130   16925 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:21:34.715916   16925 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:21:34.808401   16925 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:21:34.808501   16925 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:21:34.838796   16925 cri.go:96] found id: "52e1025c4414838af6827ca4e9af267b60d7bab029698821e94b8c8c53b47a02"
	I0110 08:21:34.838815   16925 cri.go:96] found id: "75942fc7d47191bb325503f8137ff909131d1c4d52d50d6647a07624163824bf"
	I0110 08:21:34.838819   16925 cri.go:96] found id: "1f0e0366214d13effb5e1a2fdd2281ce1ab7895177c2879232a660b16f2f36f6"
	I0110 08:21:34.838822   16925 cri.go:96] found id: "ad41fe8eb72ea17b3787fd8b9148b6f74bcbae10876696c1004d2af9067e974a"
	I0110 08:21:34.838825   16925 cri.go:96] found id: "b8196c6709973cd145e81bee513c9295c75a07f339edd1849b8f09eeca3a8902"
	I0110 08:21:34.838828   16925 cri.go:96] found id: "e283b96a0148a542df9ef8145310402e66db65dde0eeb3d91a74bffd800c43b6"
	I0110 08:21:34.838831   16925 cri.go:96] found id: "a3befa7150ca579269272406feaeddcf602e63b5aa269f6a4eb71f0400cf9f65"
	I0110 08:21:34.838833   16925 cri.go:96] found id: "ba1fcf19b0afca5f042f84fda8060304660e32ffcb595b679580f3f1681e8b1b"
	I0110 08:21:34.838836   16925 cri.go:96] found id: "e315fa8c4e521b6643d8334e448d62113f1d082439e58e1761e61d37e8c0adab"
	I0110 08:21:34.838840   16925 cri.go:96] found id: "60d2da1df937e39c979adac7502a680da7554008aebe24a90289fa081d48066c"
	I0110 08:21:34.838843   16925 cri.go:96] found id: "a26f210429ce8f81957c43d13ae7dc0c7795a1237b391a18c23dc21ee3dac83a"
	I0110 08:21:34.838847   16925 cri.go:96] found id: "ac0bb3017c2df5407dc70b2065a8642f17f4632f0eb519eae225e58db88cb7d7"
	I0110 08:21:34.838849   16925 cri.go:96] found id: "80376415781ced2a18f3b49817b93eb4bbe7ad36f4493427b2dcf4382ff8817b"
	I0110 08:21:34.838854   16925 cri.go:96] found id: "f180f3ed8bb6409a8995534d80f2ad8dbca483a62f67c39d69e4883e5841f7b7"
	I0110 08:21:34.838859   16925 cri.go:96] found id: "2149b85c56959d7cc9fe04d1ce74acdf5e9e313ff0eb11e6bab9ca79b3973900"
	I0110 08:21:34.838867   16925 cri.go:96] found id: "82bfb461b79f28c44b698878be57d4f3ed705a492181cc1e2b299581cc8e19c3"
	I0110 08:21:34.838872   16925 cri.go:96] found id: "800ef08a84703f203b5e10eeaa60cc96e4e0b501f7116ab326aec663a8d44a0a"
	I0110 08:21:34.838878   16925 cri.go:96] found id: "cc65f89692fad8f8f11138c2f42e6bc16e264609f7cf63fd4ef0448bcfbcd8a9"
	I0110 08:21:34.838883   16925 cri.go:96] found id: "c98446ea3c7dfb0a00df0216bc2d759bfb2abd9423c844e37dd879a9f1842ce2"
	I0110 08:21:34.838888   16925 cri.go:96] found id: "5f8174e666e70787ef47924deb832bd181b1187a4f93adde2719afd222583233"
	I0110 08:21:34.838892   16925 cri.go:96] found id: "273ea06f61975e2f11258602b200a4bc83fa37bbfdf217550f9a076abce1b87f"
	I0110 08:21:34.838900   16925 cri.go:96] found id: "5394108065d3cfd7430eab4005c30e45dbab412c8e88c64087470bc4bdf68d94"
	I0110 08:21:34.838905   16925 cri.go:96] found id: "b80c4916fca961561a4c4f5172bd77ec246690c99788e26501b0d53f2be91145"
	I0110 08:21:34.838913   16925 cri.go:96] found id: "f6fa6e8d4ac06e107cc11e058aaccfc1f58cc2d3bde427b80777917f1c523209"
	I0110 08:21:34.838924   16925 cri.go:96] found id: ""
	I0110 08:21:34.838970   16925 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:21:34.853830   16925 out.go:203] 
	W0110 08:21:34.855118   16925 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:21:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:21:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 08:21:34.855140   16925 out.go:285] * 
	* 
	W0110 08:21:34.855848   16925 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:21:34.857010   16925 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-910183 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.24s)

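Note: the paused-check above has two halves, both visible in the log: a crictl listing of kube-system containers (which succeeds and returns container IDs) and a runc listing (which fails). Running the halves separately isolates the failing step; a sketch, assuming the profile from this run is still up:

    # crictl half: succeeds in the log above, printing the container IDs.
    minikube -p addons-910183 ssh -- sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    # runc half: the step that actually fails in every block of this report.
    minikube -p addons-910183 ssh -- sudo runc list -f json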
TestAddons/parallel/Registry (13.78s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 2.822274ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-788cd7d5bc-49v9n" [7bef0fc4-c5cd-407e-b2f7-9ee69fbf6b75] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002279575s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-zvnnr" [c3d911de-bdc5-41e1-ac2d-29832823bf99] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00312024s
addons_test.go:394: (dbg) Run:  kubectl --context addons-910183 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-910183 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-910183 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.343657348s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-910183 ip
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910183 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910183 addons disable registry --alsologtostderr -v=1: exit status 11 (236.167856ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I0110 08:21:56.231485   19645 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:21:56.231749   19645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:21:56.231759   19645 out.go:374] Setting ErrFile to fd 2...
	I0110 08:21:56.231764   19645 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:21:56.231986   19645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:21:56.232239   19645 mustload.go:66] Loading cluster: addons-910183
	I0110 08:21:56.232567   19645 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:21:56.232581   19645 addons.go:622] checking whether the cluster is paused
	I0110 08:21:56.232656   19645 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:21:56.232668   19645 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:21:56.233118   19645 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:21:56.250252   19645 ssh_runner.go:195] Run: systemctl --version
	I0110 08:21:56.250296   19645 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:21:56.268129   19645 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:21:56.359220   19645 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:21:56.359286   19645 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:21:56.387708   19645 cri.go:96] found id: "52e1025c4414838af6827ca4e9af267b60d7bab029698821e94b8c8c53b47a02"
	I0110 08:21:56.387729   19645 cri.go:96] found id: "75942fc7d47191bb325503f8137ff909131d1c4d52d50d6647a07624163824bf"
	I0110 08:21:56.387748   19645 cri.go:96] found id: "1f0e0366214d13effb5e1a2fdd2281ce1ab7895177c2879232a660b16f2f36f6"
	I0110 08:21:56.387754   19645 cri.go:96] found id: "ad41fe8eb72ea17b3787fd8b9148b6f74bcbae10876696c1004d2af9067e974a"
	I0110 08:21:56.387758   19645 cri.go:96] found id: "b8196c6709973cd145e81bee513c9295c75a07f339edd1849b8f09eeca3a8902"
	I0110 08:21:56.387763   19645 cri.go:96] found id: "e283b96a0148a542df9ef8145310402e66db65dde0eeb3d91a74bffd800c43b6"
	I0110 08:21:56.387768   19645 cri.go:96] found id: "a3befa7150ca579269272406feaeddcf602e63b5aa269f6a4eb71f0400cf9f65"
	I0110 08:21:56.387771   19645 cri.go:96] found id: "ba1fcf19b0afca5f042f84fda8060304660e32ffcb595b679580f3f1681e8b1b"
	I0110 08:21:56.387774   19645 cri.go:96] found id: "e315fa8c4e521b6643d8334e448d62113f1d082439e58e1761e61d37e8c0adab"
	I0110 08:21:56.387781   19645 cri.go:96] found id: "60d2da1df937e39c979adac7502a680da7554008aebe24a90289fa081d48066c"
	I0110 08:21:56.387784   19645 cri.go:96] found id: "a26f210429ce8f81957c43d13ae7dc0c7795a1237b391a18c23dc21ee3dac83a"
	I0110 08:21:56.387787   19645 cri.go:96] found id: "ac0bb3017c2df5407dc70b2065a8642f17f4632f0eb519eae225e58db88cb7d7"
	I0110 08:21:56.387789   19645 cri.go:96] found id: "80376415781ced2a18f3b49817b93eb4bbe7ad36f4493427b2dcf4382ff8817b"
	I0110 08:21:56.387793   19645 cri.go:96] found id: "f180f3ed8bb6409a8995534d80f2ad8dbca483a62f67c39d69e4883e5841f7b7"
	I0110 08:21:56.387795   19645 cri.go:96] found id: "2149b85c56959d7cc9fe04d1ce74acdf5e9e313ff0eb11e6bab9ca79b3973900"
	I0110 08:21:56.387807   19645 cri.go:96] found id: "82bfb461b79f28c44b698878be57d4f3ed705a492181cc1e2b299581cc8e19c3"
	I0110 08:21:56.387810   19645 cri.go:96] found id: "800ef08a84703f203b5e10eeaa60cc96e4e0b501f7116ab326aec663a8d44a0a"
	I0110 08:21:56.387814   19645 cri.go:96] found id: "cc65f89692fad8f8f11138c2f42e6bc16e264609f7cf63fd4ef0448bcfbcd8a9"
	I0110 08:21:56.387817   19645 cri.go:96] found id: "c98446ea3c7dfb0a00df0216bc2d759bfb2abd9423c844e37dd879a9f1842ce2"
	I0110 08:21:56.387820   19645 cri.go:96] found id: "5f8174e666e70787ef47924deb832bd181b1187a4f93adde2719afd222583233"
	I0110 08:21:56.387823   19645 cri.go:96] found id: "273ea06f61975e2f11258602b200a4bc83fa37bbfdf217550f9a076abce1b87f"
	I0110 08:21:56.387826   19645 cri.go:96] found id: "5394108065d3cfd7430eab4005c30e45dbab412c8e88c64087470bc4bdf68d94"
	I0110 08:21:56.387830   19645 cri.go:96] found id: "b80c4916fca961561a4c4f5172bd77ec246690c99788e26501b0d53f2be91145"
	I0110 08:21:56.387832   19645 cri.go:96] found id: "f6fa6e8d4ac06e107cc11e058aaccfc1f58cc2d3bde427b80777917f1c523209"
	I0110 08:21:56.387835   19645 cri.go:96] found id: ""
	I0110 08:21:56.387869   19645 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:21:56.402243   19645 out.go:203] 
	W0110 08:21:56.403498   19645 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:21:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:21:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 08:21:56.403523   19645 out.go:285] * 
	* 
	W0110 08:21:56.404285   19645 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:21:56.405369   19645 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-910183 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.78s)

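Note: "runc list" opens its state root (/run/runc by default) before listing anything, so a missing directory fails the check even when nothing is actually paused. A hypothetical workaround sketch for local debugging only, untested in this run; it creates the empty state root so the listing can succeed and changes no container state:

    # Hypothetical, untested workaround: give runc a state root to open.
    minikube -p addons-910183 ssh -- sudo mkdir -p /run/runc
    minikube -p addons-910183 ssh -- sudo runc list -f json
    # Expect an empty listing instead of "open /run/runc: no such file or directory".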
TestAddons/parallel/RegistryCreds (0.39s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.305909ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-910183
addons_test.go:334: (dbg) Run:  kubectl --context addons-910183 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910183 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910183 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (234.310055ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I0110 08:21:56.613369   19727 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:21:56.613524   19727 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:21:56.613535   19727 out.go:374] Setting ErrFile to fd 2...
	I0110 08:21:56.613539   19727 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:21:56.613787   19727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:21:56.614121   19727 mustload.go:66] Loading cluster: addons-910183
	I0110 08:21:56.614562   19727 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:21:56.614582   19727 addons.go:622] checking whether the cluster is paused
	I0110 08:21:56.614710   19727 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:21:56.614727   19727 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:21:56.615264   19727 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:21:56.632788   19727 ssh_runner.go:195] Run: systemctl --version
	I0110 08:21:56.632835   19727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:21:56.650332   19727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:21:56.741353   19727 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:21:56.741443   19727 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:21:56.774061   19727 cri.go:96] found id: "52e1025c4414838af6827ca4e9af267b60d7bab029698821e94b8c8c53b47a02"
	I0110 08:21:56.774083   19727 cri.go:96] found id: "75942fc7d47191bb325503f8137ff909131d1c4d52d50d6647a07624163824bf"
	I0110 08:21:56.774099   19727 cri.go:96] found id: "1f0e0366214d13effb5e1a2fdd2281ce1ab7895177c2879232a660b16f2f36f6"
	I0110 08:21:56.774115   19727 cri.go:96] found id: "ad41fe8eb72ea17b3787fd8b9148b6f74bcbae10876696c1004d2af9067e974a"
	I0110 08:21:56.774121   19727 cri.go:96] found id: "b8196c6709973cd145e81bee513c9295c75a07f339edd1849b8f09eeca3a8902"
	I0110 08:21:56.774127   19727 cri.go:96] found id: "e283b96a0148a542df9ef8145310402e66db65dde0eeb3d91a74bffd800c43b6"
	I0110 08:21:56.774132   19727 cri.go:96] found id: "a3befa7150ca579269272406feaeddcf602e63b5aa269f6a4eb71f0400cf9f65"
	I0110 08:21:56.774140   19727 cri.go:96] found id: "ba1fcf19b0afca5f042f84fda8060304660e32ffcb595b679580f3f1681e8b1b"
	I0110 08:21:56.774145   19727 cri.go:96] found id: "e315fa8c4e521b6643d8334e448d62113f1d082439e58e1761e61d37e8c0adab"
	I0110 08:21:56.774180   19727 cri.go:96] found id: "60d2da1df937e39c979adac7502a680da7554008aebe24a90289fa081d48066c"
	I0110 08:21:56.774186   19727 cri.go:96] found id: "a26f210429ce8f81957c43d13ae7dc0c7795a1237b391a18c23dc21ee3dac83a"
	I0110 08:21:56.774189   19727 cri.go:96] found id: "ac0bb3017c2df5407dc70b2065a8642f17f4632f0eb519eae225e58db88cb7d7"
	I0110 08:21:56.774192   19727 cri.go:96] found id: "80376415781ced2a18f3b49817b93eb4bbe7ad36f4493427b2dcf4382ff8817b"
	I0110 08:21:56.774195   19727 cri.go:96] found id: "f180f3ed8bb6409a8995534d80f2ad8dbca483a62f67c39d69e4883e5841f7b7"
	I0110 08:21:56.774198   19727 cri.go:96] found id: "2149b85c56959d7cc9fe04d1ce74acdf5e9e313ff0eb11e6bab9ca79b3973900"
	I0110 08:21:56.774205   19727 cri.go:96] found id: "82bfb461b79f28c44b698878be57d4f3ed705a492181cc1e2b299581cc8e19c3"
	I0110 08:21:56.774208   19727 cri.go:96] found id: "800ef08a84703f203b5e10eeaa60cc96e4e0b501f7116ab326aec663a8d44a0a"
	I0110 08:21:56.774212   19727 cri.go:96] found id: "cc65f89692fad8f8f11138c2f42e6bc16e264609f7cf63fd4ef0448bcfbcd8a9"
	I0110 08:21:56.774215   19727 cri.go:96] found id: "c98446ea3c7dfb0a00df0216bc2d759bfb2abd9423c844e37dd879a9f1842ce2"
	I0110 08:21:56.774218   19727 cri.go:96] found id: "5f8174e666e70787ef47924deb832bd181b1187a4f93adde2719afd222583233"
	I0110 08:21:56.774223   19727 cri.go:96] found id: "273ea06f61975e2f11258602b200a4bc83fa37bbfdf217550f9a076abce1b87f"
	I0110 08:21:56.774228   19727 cri.go:96] found id: "5394108065d3cfd7430eab4005c30e45dbab412c8e88c64087470bc4bdf68d94"
	I0110 08:21:56.774230   19727 cri.go:96] found id: "b80c4916fca961561a4c4f5172bd77ec246690c99788e26501b0d53f2be91145"
	I0110 08:21:56.774233   19727 cri.go:96] found id: "f6fa6e8d4ac06e107cc11e058aaccfc1f58cc2d3bde427b80777917f1c523209"
	I0110 08:21:56.774236   19727 cri.go:96] found id: ""
	I0110 08:21:56.774272   19727 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:21:56.788587   19727 out.go:203] 
	W0110 08:21:56.790135   19727 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:21:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:21:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 08:21:56.790166   19727 out.go:285] * 
	* 
	W0110 08:21:56.790892   19727 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:21:56.792143   19727 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-910183 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.39s)

TestAddons/parallel/Ingress (10.78s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-910183 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-910183 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-910183 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [8a46fe0b-fd65-47d1-ae28-9a56c0334cd7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [8a46fe0b-fd65-47d1-ae28-9a56c0334cd7] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003479383s
I0110 08:22:02.674173    7183 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-910183 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-910183 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-910183 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910183 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910183 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (252.54116ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I0110 08:22:03.481413   20640 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:22:03.481705   20640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:22:03.481716   20640 out.go:374] Setting ErrFile to fd 2...
	I0110 08:22:03.481720   20640 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:22:03.481934   20640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:22:03.482204   20640 mustload.go:66] Loading cluster: addons-910183
	I0110 08:22:03.482620   20640 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:22:03.482639   20640 addons.go:622] checking whether the cluster is paused
	I0110 08:22:03.482783   20640 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:22:03.482800   20640 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:22:03.483293   20640 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:22:03.502117   20640 ssh_runner.go:195] Run: systemctl --version
	I0110 08:22:03.502176   20640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:22:03.519165   20640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:22:03.618320   20640 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:22:03.618400   20640 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:22:03.650878   20640 cri.go:96] found id: "aebe01af9fa8d6d47cef32b601f05c075ceb41b127c57b221c0216042caeb945"
	I0110 08:22:03.650905   20640 cri.go:96] found id: "52e1025c4414838af6827ca4e9af267b60d7bab029698821e94b8c8c53b47a02"
	I0110 08:22:03.650911   20640 cri.go:96] found id: "75942fc7d47191bb325503f8137ff909131d1c4d52d50d6647a07624163824bf"
	I0110 08:22:03.650916   20640 cri.go:96] found id: "1f0e0366214d13effb5e1a2fdd2281ce1ab7895177c2879232a660b16f2f36f6"
	I0110 08:22:03.650920   20640 cri.go:96] found id: "ad41fe8eb72ea17b3787fd8b9148b6f74bcbae10876696c1004d2af9067e974a"
	I0110 08:22:03.650926   20640 cri.go:96] found id: "b8196c6709973cd145e81bee513c9295c75a07f339edd1849b8f09eeca3a8902"
	I0110 08:22:03.650931   20640 cri.go:96] found id: "e283b96a0148a542df9ef8145310402e66db65dde0eeb3d91a74bffd800c43b6"
	I0110 08:22:03.650936   20640 cri.go:96] found id: "a3befa7150ca579269272406feaeddcf602e63b5aa269f6a4eb71f0400cf9f65"
	I0110 08:22:03.650942   20640 cri.go:96] found id: "ba1fcf19b0afca5f042f84fda8060304660e32ffcb595b679580f3f1681e8b1b"
	I0110 08:22:03.650949   20640 cri.go:96] found id: "e315fa8c4e521b6643d8334e448d62113f1d082439e58e1761e61d37e8c0adab"
	I0110 08:22:03.650960   20640 cri.go:96] found id: "60d2da1df937e39c979adac7502a680da7554008aebe24a90289fa081d48066c"
	I0110 08:22:03.650965   20640 cri.go:96] found id: "a26f210429ce8f81957c43d13ae7dc0c7795a1237b391a18c23dc21ee3dac83a"
	I0110 08:22:03.650969   20640 cri.go:96] found id: "ac0bb3017c2df5407dc70b2065a8642f17f4632f0eb519eae225e58db88cb7d7"
	I0110 08:22:03.650982   20640 cri.go:96] found id: "80376415781ced2a18f3b49817b93eb4bbe7ad36f4493427b2dcf4382ff8817b"
	I0110 08:22:03.650990   20640 cri.go:96] found id: "f180f3ed8bb6409a8995534d80f2ad8dbca483a62f67c39d69e4883e5841f7b7"
	I0110 08:22:03.650997   20640 cri.go:96] found id: "2149b85c56959d7cc9fe04d1ce74acdf5e9e313ff0eb11e6bab9ca79b3973900"
	I0110 08:22:03.651002   20640 cri.go:96] found id: "82bfb461b79f28c44b698878be57d4f3ed705a492181cc1e2b299581cc8e19c3"
	I0110 08:22:03.651007   20640 cri.go:96] found id: "800ef08a84703f203b5e10eeaa60cc96e4e0b501f7116ab326aec663a8d44a0a"
	I0110 08:22:03.651012   20640 cri.go:96] found id: "cc65f89692fad8f8f11138c2f42e6bc16e264609f7cf63fd4ef0448bcfbcd8a9"
	I0110 08:22:03.651016   20640 cri.go:96] found id: "c98446ea3c7dfb0a00df0216bc2d759bfb2abd9423c844e37dd879a9f1842ce2"
	I0110 08:22:03.651022   20640 cri.go:96] found id: "5f8174e666e70787ef47924deb832bd181b1187a4f93adde2719afd222583233"
	I0110 08:22:03.651031   20640 cri.go:96] found id: "273ea06f61975e2f11258602b200a4bc83fa37bbfdf217550f9a076abce1b87f"
	I0110 08:22:03.651036   20640 cri.go:96] found id: "5394108065d3cfd7430eab4005c30e45dbab412c8e88c64087470bc4bdf68d94"
	I0110 08:22:03.651044   20640 cri.go:96] found id: "b80c4916fca961561a4c4f5172bd77ec246690c99788e26501b0d53f2be91145"
	I0110 08:22:03.651050   20640 cri.go:96] found id: "f6fa6e8d4ac06e107cc11e058aaccfc1f58cc2d3bde427b80777917f1c523209"
	I0110 08:22:03.651054   20640 cri.go:96] found id: ""
	I0110 08:22:03.651104   20640 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:22:03.671217   20640 out.go:203] 
	W0110 08:22:03.672526   20640 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:22:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:22:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 08:22:03.672551   20640 out.go:285] * 
	* 
	W0110 08:22:03.673584   20640 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:22:03.674766   20640 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-910183 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910183 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910183 addons disable ingress --alsologtostderr -v=1: exit status 11 (240.653773ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I0110 08:22:03.740093   20976 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:22:03.740361   20976 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:22:03.740372   20976 out.go:374] Setting ErrFile to fd 2...
	I0110 08:22:03.740376   20976 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:22:03.740562   20976 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:22:03.740837   20976 mustload.go:66] Loading cluster: addons-910183
	I0110 08:22:03.741158   20976 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:22:03.741171   20976 addons.go:622] checking whether the cluster is paused
	I0110 08:22:03.741257   20976 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:22:03.741269   20976 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:22:03.741614   20976 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:22:03.761599   20976 ssh_runner.go:195] Run: systemctl --version
	I0110 08:22:03.761671   20976 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:22:03.778976   20976 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:22:03.870764   20976 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:22:03.870886   20976 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:22:03.901482   20976 cri.go:96] found id: "aebe01af9fa8d6d47cef32b601f05c075ceb41b127c57b221c0216042caeb945"
	I0110 08:22:03.901502   20976 cri.go:96] found id: "52e1025c4414838af6827ca4e9af267b60d7bab029698821e94b8c8c53b47a02"
	I0110 08:22:03.901506   20976 cri.go:96] found id: "75942fc7d47191bb325503f8137ff909131d1c4d52d50d6647a07624163824bf"
	I0110 08:22:03.901509   20976 cri.go:96] found id: "1f0e0366214d13effb5e1a2fdd2281ce1ab7895177c2879232a660b16f2f36f6"
	I0110 08:22:03.901512   20976 cri.go:96] found id: "ad41fe8eb72ea17b3787fd8b9148b6f74bcbae10876696c1004d2af9067e974a"
	I0110 08:22:03.901516   20976 cri.go:96] found id: "b8196c6709973cd145e81bee513c9295c75a07f339edd1849b8f09eeca3a8902"
	I0110 08:22:03.901518   20976 cri.go:96] found id: "e283b96a0148a542df9ef8145310402e66db65dde0eeb3d91a74bffd800c43b6"
	I0110 08:22:03.901521   20976 cri.go:96] found id: "a3befa7150ca579269272406feaeddcf602e63b5aa269f6a4eb71f0400cf9f65"
	I0110 08:22:03.901524   20976 cri.go:96] found id: "ba1fcf19b0afca5f042f84fda8060304660e32ffcb595b679580f3f1681e8b1b"
	I0110 08:22:03.901529   20976 cri.go:96] found id: "e315fa8c4e521b6643d8334e448d62113f1d082439e58e1761e61d37e8c0adab"
	I0110 08:22:03.901532   20976 cri.go:96] found id: "60d2da1df937e39c979adac7502a680da7554008aebe24a90289fa081d48066c"
	I0110 08:22:03.901535   20976 cri.go:96] found id: "a26f210429ce8f81957c43d13ae7dc0c7795a1237b391a18c23dc21ee3dac83a"
	I0110 08:22:03.901537   20976 cri.go:96] found id: "ac0bb3017c2df5407dc70b2065a8642f17f4632f0eb519eae225e58db88cb7d7"
	I0110 08:22:03.901541   20976 cri.go:96] found id: "80376415781ced2a18f3b49817b93eb4bbe7ad36f4493427b2dcf4382ff8817b"
	I0110 08:22:03.901545   20976 cri.go:96] found id: "f180f3ed8bb6409a8995534d80f2ad8dbca483a62f67c39d69e4883e5841f7b7"
	I0110 08:22:03.901549   20976 cri.go:96] found id: "2149b85c56959d7cc9fe04d1ce74acdf5e9e313ff0eb11e6bab9ca79b3973900"
	I0110 08:22:03.901552   20976 cri.go:96] found id: "82bfb461b79f28c44b698878be57d4f3ed705a492181cc1e2b299581cc8e19c3"
	I0110 08:22:03.901555   20976 cri.go:96] found id: "800ef08a84703f203b5e10eeaa60cc96e4e0b501f7116ab326aec663a8d44a0a"
	I0110 08:22:03.901558   20976 cri.go:96] found id: "cc65f89692fad8f8f11138c2f42e6bc16e264609f7cf63fd4ef0448bcfbcd8a9"
	I0110 08:22:03.901561   20976 cri.go:96] found id: "c98446ea3c7dfb0a00df0216bc2d759bfb2abd9423c844e37dd879a9f1842ce2"
	I0110 08:22:03.901564   20976 cri.go:96] found id: "5f8174e666e70787ef47924deb832bd181b1187a4f93adde2719afd222583233"
	I0110 08:22:03.901566   20976 cri.go:96] found id: "273ea06f61975e2f11258602b200a4bc83fa37bbfdf217550f9a076abce1b87f"
	I0110 08:22:03.901569   20976 cri.go:96] found id: "5394108065d3cfd7430eab4005c30e45dbab412c8e88c64087470bc4bdf68d94"
	I0110 08:22:03.901572   20976 cri.go:96] found id: "b80c4916fca961561a4c4f5172bd77ec246690c99788e26501b0d53f2be91145"
	I0110 08:22:03.901575   20976 cri.go:96] found id: "f6fa6e8d4ac06e107cc11e058aaccfc1f58cc2d3bde427b80777917f1c523209"
	I0110 08:22:03.901577   20976 cri.go:96] found id: ""
	I0110 08:22:03.901614   20976 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:22:03.915751   20976 out.go:203] 
	W0110 08:22:03.917098   20976 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:22:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:22:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 08:22:03.917129   20976 out.go:285] * 
	* 
	W0110 08:22:03.917819   20976 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:22:03.919046   20976 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-910183 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (10.78s)

TestAddons/parallel/InspektorGadget (6.24s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-kjbpw" [e05d050e-fad2-4867-b169-395c8b67f2f2] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00378844s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910183 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910183 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (233.930911ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I0110 08:21:59.240066   20117 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:21:59.240319   20117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:21:59.240328   20117 out.go:374] Setting ErrFile to fd 2...
	I0110 08:21:59.240331   20117 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:21:59.240540   20117 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:21:59.240800   20117 mustload.go:66] Loading cluster: addons-910183
	I0110 08:21:59.241104   20117 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:21:59.241126   20117 addons.go:622] checking whether the cluster is paused
	I0110 08:21:59.241206   20117 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:21:59.241216   20117 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:21:59.241585   20117 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:21:59.259335   20117 ssh_runner.go:195] Run: systemctl --version
	I0110 08:21:59.259376   20117 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:21:59.278020   20117 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:21:59.370131   20117 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:21:59.370218   20117 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:21:59.400447   20117 cri.go:96] found id: "52e1025c4414838af6827ca4e9af267b60d7bab029698821e94b8c8c53b47a02"
	I0110 08:21:59.400464   20117 cri.go:96] found id: "75942fc7d47191bb325503f8137ff909131d1c4d52d50d6647a07624163824bf"
	I0110 08:21:59.400468   20117 cri.go:96] found id: "1f0e0366214d13effb5e1a2fdd2281ce1ab7895177c2879232a660b16f2f36f6"
	I0110 08:21:59.400471   20117 cri.go:96] found id: "ad41fe8eb72ea17b3787fd8b9148b6f74bcbae10876696c1004d2af9067e974a"
	I0110 08:21:59.400474   20117 cri.go:96] found id: "b8196c6709973cd145e81bee513c9295c75a07f339edd1849b8f09eeca3a8902"
	I0110 08:21:59.400477   20117 cri.go:96] found id: "e283b96a0148a542df9ef8145310402e66db65dde0eeb3d91a74bffd800c43b6"
	I0110 08:21:59.400479   20117 cri.go:96] found id: "a3befa7150ca579269272406feaeddcf602e63b5aa269f6a4eb71f0400cf9f65"
	I0110 08:21:59.400482   20117 cri.go:96] found id: "ba1fcf19b0afca5f042f84fda8060304660e32ffcb595b679580f3f1681e8b1b"
	I0110 08:21:59.400484   20117 cri.go:96] found id: "e315fa8c4e521b6643d8334e448d62113f1d082439e58e1761e61d37e8c0adab"
	I0110 08:21:59.400494   20117 cri.go:96] found id: "60d2da1df937e39c979adac7502a680da7554008aebe24a90289fa081d48066c"
	I0110 08:21:59.400497   20117 cri.go:96] found id: "a26f210429ce8f81957c43d13ae7dc0c7795a1237b391a18c23dc21ee3dac83a"
	I0110 08:21:59.400500   20117 cri.go:96] found id: "ac0bb3017c2df5407dc70b2065a8642f17f4632f0eb519eae225e58db88cb7d7"
	I0110 08:21:59.400503   20117 cri.go:96] found id: "80376415781ced2a18f3b49817b93eb4bbe7ad36f4493427b2dcf4382ff8817b"
	I0110 08:21:59.400508   20117 cri.go:96] found id: "f180f3ed8bb6409a8995534d80f2ad8dbca483a62f67c39d69e4883e5841f7b7"
	I0110 08:21:59.400515   20117 cri.go:96] found id: "2149b85c56959d7cc9fe04d1ce74acdf5e9e313ff0eb11e6bab9ca79b3973900"
	I0110 08:21:59.400530   20117 cri.go:96] found id: "82bfb461b79f28c44b698878be57d4f3ed705a492181cc1e2b299581cc8e19c3"
	I0110 08:21:59.400535   20117 cri.go:96] found id: "800ef08a84703f203b5e10eeaa60cc96e4e0b501f7116ab326aec663a8d44a0a"
	I0110 08:21:59.400539   20117 cri.go:96] found id: "cc65f89692fad8f8f11138c2f42e6bc16e264609f7cf63fd4ef0448bcfbcd8a9"
	I0110 08:21:59.400542   20117 cri.go:96] found id: "c98446ea3c7dfb0a00df0216bc2d759bfb2abd9423c844e37dd879a9f1842ce2"
	I0110 08:21:59.400545   20117 cri.go:96] found id: "5f8174e666e70787ef47924deb832bd181b1187a4f93adde2719afd222583233"
	I0110 08:21:59.400548   20117 cri.go:96] found id: "273ea06f61975e2f11258602b200a4bc83fa37bbfdf217550f9a076abce1b87f"
	I0110 08:21:59.400551   20117 cri.go:96] found id: "5394108065d3cfd7430eab4005c30e45dbab412c8e88c64087470bc4bdf68d94"
	I0110 08:21:59.400554   20117 cri.go:96] found id: "b80c4916fca961561a4c4f5172bd77ec246690c99788e26501b0d53f2be91145"
	I0110 08:21:59.400558   20117 cri.go:96] found id: "f6fa6e8d4ac06e107cc11e058aaccfc1f58cc2d3bde427b80777917f1c523209"
	I0110 08:21:59.400560   20117 cri.go:96] found id: ""
	I0110 08:21:59.400597   20117 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:21:59.414489   20117 out.go:203] 
	W0110 08:21:59.415782   20117 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:21:59Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:21:59Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 08:21:59.415814   20117 out.go:285] * 
	* 
	W0110 08:21:59.416648   20117 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:21:59.417914   20117 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-910183 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.24s)

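Note: before filing per-addon issues, it is worth confirming the failure is environmental rather than addon-specific; every failing block in this report dies on the same two strings. A quick check over a saved copy of the report (the filename is illustrative):

    # Count the shared signature across a saved text copy of this report.
    grep -c 'MK_ADDON_DISABLE_PAUSED' Docker_Linux_crio_22427.txt
    grep -c 'open /run/runc: no such file or directory' Docker_Linux_crio_22427.txt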
TestAddons/parallel/MetricsServer (5.3s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 2.613577ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-5778bb4788-228dp" [15f7e27d-e734-419e-bc25-0a33689452ac] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002333835s
addons_test.go:465: (dbg) Run:  kubectl --context addons-910183 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910183 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910183 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (238.635412ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 08:21:47.749223   18204 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:21:47.749501   18204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:21:47.749511   18204 out.go:374] Setting ErrFile to fd 2...
	I0110 08:21:47.749515   18204 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:21:47.749726   18204 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:21:47.750086   18204 mustload.go:66] Loading cluster: addons-910183
	I0110 08:21:47.750536   18204 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:21:47.750556   18204 addons.go:622] checking whether the cluster is paused
	I0110 08:21:47.750682   18204 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:21:47.750696   18204 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:21:47.751078   18204 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:21:47.769213   18204 ssh_runner.go:195] Run: systemctl --version
	I0110 08:21:47.769256   18204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:21:47.786495   18204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:21:47.879252   18204 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:21:47.879349   18204 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:21:47.908324   18204 cri.go:96] found id: "52e1025c4414838af6827ca4e9af267b60d7bab029698821e94b8c8c53b47a02"
	I0110 08:21:47.908342   18204 cri.go:96] found id: "75942fc7d47191bb325503f8137ff909131d1c4d52d50d6647a07624163824bf"
	I0110 08:21:47.908346   18204 cri.go:96] found id: "1f0e0366214d13effb5e1a2fdd2281ce1ab7895177c2879232a660b16f2f36f6"
	I0110 08:21:47.908349   18204 cri.go:96] found id: "ad41fe8eb72ea17b3787fd8b9148b6f74bcbae10876696c1004d2af9067e974a"
	I0110 08:21:47.908351   18204 cri.go:96] found id: "b8196c6709973cd145e81bee513c9295c75a07f339edd1849b8f09eeca3a8902"
	I0110 08:21:47.908355   18204 cri.go:96] found id: "e283b96a0148a542df9ef8145310402e66db65dde0eeb3d91a74bffd800c43b6"
	I0110 08:21:47.908359   18204 cri.go:96] found id: "a3befa7150ca579269272406feaeddcf602e63b5aa269f6a4eb71f0400cf9f65"
	I0110 08:21:47.908367   18204 cri.go:96] found id: "ba1fcf19b0afca5f042f84fda8060304660e32ffcb595b679580f3f1681e8b1b"
	I0110 08:21:47.908370   18204 cri.go:96] found id: "e315fa8c4e521b6643d8334e448d62113f1d082439e58e1761e61d37e8c0adab"
	I0110 08:21:47.908375   18204 cri.go:96] found id: "60d2da1df937e39c979adac7502a680da7554008aebe24a90289fa081d48066c"
	I0110 08:21:47.908378   18204 cri.go:96] found id: "a26f210429ce8f81957c43d13ae7dc0c7795a1237b391a18c23dc21ee3dac83a"
	I0110 08:21:47.908381   18204 cri.go:96] found id: "ac0bb3017c2df5407dc70b2065a8642f17f4632f0eb519eae225e58db88cb7d7"
	I0110 08:21:47.908384   18204 cri.go:96] found id: "80376415781ced2a18f3b49817b93eb4bbe7ad36f4493427b2dcf4382ff8817b"
	I0110 08:21:47.908387   18204 cri.go:96] found id: "f180f3ed8bb6409a8995534d80f2ad8dbca483a62f67c39d69e4883e5841f7b7"
	I0110 08:21:47.908390   18204 cri.go:96] found id: "2149b85c56959d7cc9fe04d1ce74acdf5e9e313ff0eb11e6bab9ca79b3973900"
	I0110 08:21:47.908395   18204 cri.go:96] found id: "82bfb461b79f28c44b698878be57d4f3ed705a492181cc1e2b299581cc8e19c3"
	I0110 08:21:47.908398   18204 cri.go:96] found id: "800ef08a84703f203b5e10eeaa60cc96e4e0b501f7116ab326aec663a8d44a0a"
	I0110 08:21:47.908401   18204 cri.go:96] found id: "cc65f89692fad8f8f11138c2f42e6bc16e264609f7cf63fd4ef0448bcfbcd8a9"
	I0110 08:21:47.908403   18204 cri.go:96] found id: "c98446ea3c7dfb0a00df0216bc2d759bfb2abd9423c844e37dd879a9f1842ce2"
	I0110 08:21:47.908405   18204 cri.go:96] found id: "5f8174e666e70787ef47924deb832bd181b1187a4f93adde2719afd222583233"
	I0110 08:21:47.908408   18204 cri.go:96] found id: "273ea06f61975e2f11258602b200a4bc83fa37bbfdf217550f9a076abce1b87f"
	I0110 08:21:47.908410   18204 cri.go:96] found id: "5394108065d3cfd7430eab4005c30e45dbab412c8e88c64087470bc4bdf68d94"
	I0110 08:21:47.908413   18204 cri.go:96] found id: "b80c4916fca961561a4c4f5172bd77ec246690c99788e26501b0d53f2be91145"
	I0110 08:21:47.908416   18204 cri.go:96] found id: "f6fa6e8d4ac06e107cc11e058aaccfc1f58cc2d3bde427b80777917f1c523209"
	I0110 08:21:47.908419   18204 cri.go:96] found id: ""
	I0110 08:21:47.908451   18204 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:21:47.921593   18204 out.go:203] 
	W0110 08:21:47.922793   18204 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:21:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 08:21:47.922817   18204 out.go:285] * 
	W0110 08:21:47.923467   18204 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:21:47.924803   18204 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-910183 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.30s)

                                                
                                    
TestAddons/parallel/CSI (40.99s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0110 08:21:47.932332    7183 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0110 08:21:47.935458    7183 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0110 08:21:47.935488    7183 kapi.go:107] duration metric: took 3.167704ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.179146ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-910183 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc -o jsonpath={.status.phase} -n default
2026/01/10 08:21:56 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-910183 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [32d6d311-ecf0-428e-b4f5-4742f5259abd] Pending
helpers_test.go:353: "task-pv-pod" [32d6d311-ecf0-428e-b4f5-4742f5259abd] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 6.003950795s
addons_test.go:574: (dbg) Run:  kubectl --context addons-910183 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-910183 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-910183 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-910183 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-910183 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-910183 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-910183 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [45a13eb6-4d1f-498e-b4d0-5a204fde2d69] Pending
helpers_test.go:353: "task-pv-pod-restore" [45a13eb6-4d1f-498e-b4d0-5a204fde2d69] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.0037198s
addons_test.go:616: (dbg) Run:  kubectl --context addons-910183 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-910183 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-910183 delete volumesnapshot new-snapshot-demo
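
For reference, the repeated helpers_test.go:403 lines above are a poll loop: the harness re-runs `kubectl get pvc <name> -o jsonpath={.status.phase}` until the claim reports Bound or the 6m0s budget expires (the volume snapshot wait at helpers_test.go:428 does the same against .status.readyToUse). A rough Go equivalent follows, assuming kubectl on PATH and the addons-910183 context — the names and the fixed 1s interval are illustrative, not the harness's actual backoff:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitPVCBound re-runs the same jsonpath query the test helper uses until
// the PVC phase reads "Bound" or the deadline passes.
func waitPVCBound(kubectx, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kubectx, "get", "pvc", name,
			"-o", "jsonpath={.status.phase}", "-n", "default").Output()
		if err == nil && string(out) == "Bound" {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("pvc %s not Bound within %s", name, timeout)
}

func main() {
	fmt.Println(waitPVCBound("addons-910183", "hpvc", 6*time.Minute))
}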
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910183 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910183 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (230.690723ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 08:22:28.499872   21773 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:22:28.500002   21773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:22:28.500011   21773 out.go:374] Setting ErrFile to fd 2...
	I0110 08:22:28.500015   21773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:22:28.500219   21773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:22:28.500466   21773 mustload.go:66] Loading cluster: addons-910183
	I0110 08:22:28.500769   21773 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:22:28.500787   21773 addons.go:622] checking whether the cluster is paused
	I0110 08:22:28.500866   21773 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:22:28.500877   21773 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:22:28.501241   21773 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:22:28.519352   21773 ssh_runner.go:195] Run: systemctl --version
	I0110 08:22:28.519408   21773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:22:28.537227   21773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:22:28.629209   21773 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:22:28.629282   21773 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:22:28.658117   21773 cri.go:96] found id: "aebe01af9fa8d6d47cef32b601f05c075ceb41b127c57b221c0216042caeb945"
	I0110 08:22:28.658145   21773 cri.go:96] found id: "52e1025c4414838af6827ca4e9af267b60d7bab029698821e94b8c8c53b47a02"
	I0110 08:22:28.658150   21773 cri.go:96] found id: "75942fc7d47191bb325503f8137ff909131d1c4d52d50d6647a07624163824bf"
	I0110 08:22:28.658154   21773 cri.go:96] found id: "1f0e0366214d13effb5e1a2fdd2281ce1ab7895177c2879232a660b16f2f36f6"
	I0110 08:22:28.658159   21773 cri.go:96] found id: "ad41fe8eb72ea17b3787fd8b9148b6f74bcbae10876696c1004d2af9067e974a"
	I0110 08:22:28.658165   21773 cri.go:96] found id: "b8196c6709973cd145e81bee513c9295c75a07f339edd1849b8f09eeca3a8902"
	I0110 08:22:28.658169   21773 cri.go:96] found id: "e283b96a0148a542df9ef8145310402e66db65dde0eeb3d91a74bffd800c43b6"
	I0110 08:22:28.658173   21773 cri.go:96] found id: "a3befa7150ca579269272406feaeddcf602e63b5aa269f6a4eb71f0400cf9f65"
	I0110 08:22:28.658177   21773 cri.go:96] found id: "ba1fcf19b0afca5f042f84fda8060304660e32ffcb595b679580f3f1681e8b1b"
	I0110 08:22:28.658198   21773 cri.go:96] found id: "e315fa8c4e521b6643d8334e448d62113f1d082439e58e1761e61d37e8c0adab"
	I0110 08:22:28.658204   21773 cri.go:96] found id: "60d2da1df937e39c979adac7502a680da7554008aebe24a90289fa081d48066c"
	I0110 08:22:28.658209   21773 cri.go:96] found id: "a26f210429ce8f81957c43d13ae7dc0c7795a1237b391a18c23dc21ee3dac83a"
	I0110 08:22:28.658215   21773 cri.go:96] found id: "ac0bb3017c2df5407dc70b2065a8642f17f4632f0eb519eae225e58db88cb7d7"
	I0110 08:22:28.658223   21773 cri.go:96] found id: "80376415781ced2a18f3b49817b93eb4bbe7ad36f4493427b2dcf4382ff8817b"
	I0110 08:22:28.658229   21773 cri.go:96] found id: "f180f3ed8bb6409a8995534d80f2ad8dbca483a62f67c39d69e4883e5841f7b7"
	I0110 08:22:28.658236   21773 cri.go:96] found id: "2149b85c56959d7cc9fe04d1ce74acdf5e9e313ff0eb11e6bab9ca79b3973900"
	I0110 08:22:28.658240   21773 cri.go:96] found id: "82bfb461b79f28c44b698878be57d4f3ed705a492181cc1e2b299581cc8e19c3"
	I0110 08:22:28.658246   21773 cri.go:96] found id: "800ef08a84703f203b5e10eeaa60cc96e4e0b501f7116ab326aec663a8d44a0a"
	I0110 08:22:28.658251   21773 cri.go:96] found id: "cc65f89692fad8f8f11138c2f42e6bc16e264609f7cf63fd4ef0448bcfbcd8a9"
	I0110 08:22:28.658258   21773 cri.go:96] found id: "c98446ea3c7dfb0a00df0216bc2d759bfb2abd9423c844e37dd879a9f1842ce2"
	I0110 08:22:28.658267   21773 cri.go:96] found id: "5f8174e666e70787ef47924deb832bd181b1187a4f93adde2719afd222583233"
	I0110 08:22:28.658275   21773 cri.go:96] found id: "273ea06f61975e2f11258602b200a4bc83fa37bbfdf217550f9a076abce1b87f"
	I0110 08:22:28.658279   21773 cri.go:96] found id: "5394108065d3cfd7430eab4005c30e45dbab412c8e88c64087470bc4bdf68d94"
	I0110 08:22:28.658283   21773 cri.go:96] found id: "b80c4916fca961561a4c4f5172bd77ec246690c99788e26501b0d53f2be91145"
	I0110 08:22:28.658288   21773 cri.go:96] found id: "f6fa6e8d4ac06e107cc11e058aaccfc1f58cc2d3bde427b80777917f1c523209"
	I0110 08:22:28.658293   21773 cri.go:96] found id: ""
	I0110 08:22:28.658342   21773 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:22:28.671935   21773 out.go:203] 
	W0110 08:22:28.673156   21773 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:22:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 08:22:28.673178   21773 out.go:285] * 
	W0110 08:22:28.674143   21773 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:22:28.675293   21773 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-910183 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910183 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910183 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (244.18361ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 08:22:28.734896   21835 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:22:28.735220   21835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:22:28.735236   21835 out.go:374] Setting ErrFile to fd 2...
	I0110 08:22:28.735243   21835 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:22:28.735449   21835 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:22:28.735712   21835 mustload.go:66] Loading cluster: addons-910183
	I0110 08:22:28.736036   21835 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:22:28.736050   21835 addons.go:622] checking whether the cluster is paused
	I0110 08:22:28.736123   21835 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:22:28.736134   21835 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:22:28.736495   21835 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:22:28.753547   21835 ssh_runner.go:195] Run: systemctl --version
	I0110 08:22:28.753607   21835 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:22:28.770769   21835 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:22:28.863543   21835 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:22:28.863663   21835 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:22:28.894352   21835 cri.go:96] found id: "aebe01af9fa8d6d47cef32b601f05c075ceb41b127c57b221c0216042caeb945"
	I0110 08:22:28.894378   21835 cri.go:96] found id: "52e1025c4414838af6827ca4e9af267b60d7bab029698821e94b8c8c53b47a02"
	I0110 08:22:28.894383   21835 cri.go:96] found id: "75942fc7d47191bb325503f8137ff909131d1c4d52d50d6647a07624163824bf"
	I0110 08:22:28.894386   21835 cri.go:96] found id: "1f0e0366214d13effb5e1a2fdd2281ce1ab7895177c2879232a660b16f2f36f6"
	I0110 08:22:28.894389   21835 cri.go:96] found id: "ad41fe8eb72ea17b3787fd8b9148b6f74bcbae10876696c1004d2af9067e974a"
	I0110 08:22:28.894393   21835 cri.go:96] found id: "b8196c6709973cd145e81bee513c9295c75a07f339edd1849b8f09eeca3a8902"
	I0110 08:22:28.894397   21835 cri.go:96] found id: "e283b96a0148a542df9ef8145310402e66db65dde0eeb3d91a74bffd800c43b6"
	I0110 08:22:28.894400   21835 cri.go:96] found id: "a3befa7150ca579269272406feaeddcf602e63b5aa269f6a4eb71f0400cf9f65"
	I0110 08:22:28.894402   21835 cri.go:96] found id: "ba1fcf19b0afca5f042f84fda8060304660e32ffcb595b679580f3f1681e8b1b"
	I0110 08:22:28.894408   21835 cri.go:96] found id: "e315fa8c4e521b6643d8334e448d62113f1d082439e58e1761e61d37e8c0adab"
	I0110 08:22:28.894412   21835 cri.go:96] found id: "60d2da1df937e39c979adac7502a680da7554008aebe24a90289fa081d48066c"
	I0110 08:22:28.894415   21835 cri.go:96] found id: "a26f210429ce8f81957c43d13ae7dc0c7795a1237b391a18c23dc21ee3dac83a"
	I0110 08:22:28.894418   21835 cri.go:96] found id: "ac0bb3017c2df5407dc70b2065a8642f17f4632f0eb519eae225e58db88cb7d7"
	I0110 08:22:28.894427   21835 cri.go:96] found id: "80376415781ced2a18f3b49817b93eb4bbe7ad36f4493427b2dcf4382ff8817b"
	I0110 08:22:28.894432   21835 cri.go:96] found id: "f180f3ed8bb6409a8995534d80f2ad8dbca483a62f67c39d69e4883e5841f7b7"
	I0110 08:22:28.894438   21835 cri.go:96] found id: "2149b85c56959d7cc9fe04d1ce74acdf5e9e313ff0eb11e6bab9ca79b3973900"
	I0110 08:22:28.894443   21835 cri.go:96] found id: "82bfb461b79f28c44b698878be57d4f3ed705a492181cc1e2b299581cc8e19c3"
	I0110 08:22:28.894448   21835 cri.go:96] found id: "800ef08a84703f203b5e10eeaa60cc96e4e0b501f7116ab326aec663a8d44a0a"
	I0110 08:22:28.894453   21835 cri.go:96] found id: "cc65f89692fad8f8f11138c2f42e6bc16e264609f7cf63fd4ef0448bcfbcd8a9"
	I0110 08:22:28.894457   21835 cri.go:96] found id: "c98446ea3c7dfb0a00df0216bc2d759bfb2abd9423c844e37dd879a9f1842ce2"
	I0110 08:22:28.894462   21835 cri.go:96] found id: "5f8174e666e70787ef47924deb832bd181b1187a4f93adde2719afd222583233"
	I0110 08:22:28.894466   21835 cri.go:96] found id: "273ea06f61975e2f11258602b200a4bc83fa37bbfdf217550f9a076abce1b87f"
	I0110 08:22:28.894475   21835 cri.go:96] found id: "5394108065d3cfd7430eab4005c30e45dbab412c8e88c64087470bc4bdf68d94"
	I0110 08:22:28.894480   21835 cri.go:96] found id: "b80c4916fca961561a4c4f5172bd77ec246690c99788e26501b0d53f2be91145"
	I0110 08:22:28.894487   21835 cri.go:96] found id: "f6fa6e8d4ac06e107cc11e058aaccfc1f58cc2d3bde427b80777917f1c523209"
	I0110 08:22:28.894489   21835 cri.go:96] found id: ""
	I0110 08:22:28.894538   21835 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:22:28.916097   21835 out.go:203] 
	W0110 08:22:28.917405   21835 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:22:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 08:22:28.917422   21835 out.go:285] * 
	W0110 08:22:28.918138   21835 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:22:28.919290   21835 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-910183 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (40.99s)

                                                
                                    
TestAddons/parallel/Headlamp (2.46s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-910183 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-910183 --alsologtostderr -v=1: exit status 11 (235.603265ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 08:21:42.681377   17257 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:21:42.681916   17257 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:21:42.681931   17257 out.go:374] Setting ErrFile to fd 2...
	I0110 08:21:42.681937   17257 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:21:42.682229   17257 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:21:42.682491   17257 mustload.go:66] Loading cluster: addons-910183
	I0110 08:21:42.682792   17257 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:21:42.682806   17257 addons.go:622] checking whether the cluster is paused
	I0110 08:21:42.682888   17257 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:21:42.682899   17257 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:21:42.683278   17257 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:21:42.700619   17257 ssh_runner.go:195] Run: systemctl --version
	I0110 08:21:42.700673   17257 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:21:42.719207   17257 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:21:42.812300   17257 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:21:42.812385   17257 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:21:42.841904   17257 cri.go:96] found id: "52e1025c4414838af6827ca4e9af267b60d7bab029698821e94b8c8c53b47a02"
	I0110 08:21:42.841931   17257 cri.go:96] found id: "75942fc7d47191bb325503f8137ff909131d1c4d52d50d6647a07624163824bf"
	I0110 08:21:42.841938   17257 cri.go:96] found id: "1f0e0366214d13effb5e1a2fdd2281ce1ab7895177c2879232a660b16f2f36f6"
	I0110 08:21:42.841943   17257 cri.go:96] found id: "ad41fe8eb72ea17b3787fd8b9148b6f74bcbae10876696c1004d2af9067e974a"
	I0110 08:21:42.841948   17257 cri.go:96] found id: "b8196c6709973cd145e81bee513c9295c75a07f339edd1849b8f09eeca3a8902"
	I0110 08:21:42.841952   17257 cri.go:96] found id: "e283b96a0148a542df9ef8145310402e66db65dde0eeb3d91a74bffd800c43b6"
	I0110 08:21:42.841956   17257 cri.go:96] found id: "a3befa7150ca579269272406feaeddcf602e63b5aa269f6a4eb71f0400cf9f65"
	I0110 08:21:42.841958   17257 cri.go:96] found id: "ba1fcf19b0afca5f042f84fda8060304660e32ffcb595b679580f3f1681e8b1b"
	I0110 08:21:42.841961   17257 cri.go:96] found id: "e315fa8c4e521b6643d8334e448d62113f1d082439e58e1761e61d37e8c0adab"
	I0110 08:21:42.841967   17257 cri.go:96] found id: "60d2da1df937e39c979adac7502a680da7554008aebe24a90289fa081d48066c"
	I0110 08:21:42.841970   17257 cri.go:96] found id: "a26f210429ce8f81957c43d13ae7dc0c7795a1237b391a18c23dc21ee3dac83a"
	I0110 08:21:42.841972   17257 cri.go:96] found id: "ac0bb3017c2df5407dc70b2065a8642f17f4632f0eb519eae225e58db88cb7d7"
	I0110 08:21:42.841975   17257 cri.go:96] found id: "80376415781ced2a18f3b49817b93eb4bbe7ad36f4493427b2dcf4382ff8817b"
	I0110 08:21:42.841989   17257 cri.go:96] found id: "f180f3ed8bb6409a8995534d80f2ad8dbca483a62f67c39d69e4883e5841f7b7"
	I0110 08:21:42.841995   17257 cri.go:96] found id: "2149b85c56959d7cc9fe04d1ce74acdf5e9e313ff0eb11e6bab9ca79b3973900"
	I0110 08:21:42.841999   17257 cri.go:96] found id: "82bfb461b79f28c44b698878be57d4f3ed705a492181cc1e2b299581cc8e19c3"
	I0110 08:21:42.842002   17257 cri.go:96] found id: "800ef08a84703f203b5e10eeaa60cc96e4e0b501f7116ab326aec663a8d44a0a"
	I0110 08:21:42.842005   17257 cri.go:96] found id: "cc65f89692fad8f8f11138c2f42e6bc16e264609f7cf63fd4ef0448bcfbcd8a9"
	I0110 08:21:42.842008   17257 cri.go:96] found id: "c98446ea3c7dfb0a00df0216bc2d759bfb2abd9423c844e37dd879a9f1842ce2"
	I0110 08:21:42.842010   17257 cri.go:96] found id: "5f8174e666e70787ef47924deb832bd181b1187a4f93adde2719afd222583233"
	I0110 08:21:42.842013   17257 cri.go:96] found id: "273ea06f61975e2f11258602b200a4bc83fa37bbfdf217550f9a076abce1b87f"
	I0110 08:21:42.842016   17257 cri.go:96] found id: "5394108065d3cfd7430eab4005c30e45dbab412c8e88c64087470bc4bdf68d94"
	I0110 08:21:42.842018   17257 cri.go:96] found id: "b80c4916fca961561a4c4f5172bd77ec246690c99788e26501b0d53f2be91145"
	I0110 08:21:42.842021   17257 cri.go:96] found id: "f6fa6e8d4ac06e107cc11e058aaccfc1f58cc2d3bde427b80777917f1c523209"
	I0110 08:21:42.842024   17257 cri.go:96] found id: ""
	I0110 08:21:42.842061   17257 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:21:42.855707   17257 out.go:203] 
	W0110 08:21:42.857160   17257 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:21:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 08:21:42.857174   17257 out.go:285] * 
	W0110 08:21:42.857841   17257 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:21:42.859304   17257 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-910183 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-910183
helpers_test.go:244: (dbg) docker inspect addons-910183:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "be399e7c0cc7a8c26dec31617a8f0c91efd6fba517fc1e3d729d1025374bfcce",
	        "Created": "2026-01-10T08:20:27.814143717Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 9145,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T08:20:27.845527995Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/be399e7c0cc7a8c26dec31617a8f0c91efd6fba517fc1e3d729d1025374bfcce/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/be399e7c0cc7a8c26dec31617a8f0c91efd6fba517fc1e3d729d1025374bfcce/hostname",
	        "HostsPath": "/var/lib/docker/containers/be399e7c0cc7a8c26dec31617a8f0c91efd6fba517fc1e3d729d1025374bfcce/hosts",
	        "LogPath": "/var/lib/docker/containers/be399e7c0cc7a8c26dec31617a8f0c91efd6fba517fc1e3d729d1025374bfcce/be399e7c0cc7a8c26dec31617a8f0c91efd6fba517fc1e3d729d1025374bfcce-json.log",
	        "Name": "/addons-910183",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-910183:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-910183",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "be399e7c0cc7a8c26dec31617a8f0c91efd6fba517fc1e3d729d1025374bfcce",
	                "LowerDir": "/var/lib/docker/overlay2/322b3b2341c11ccf675e89e43ed0a7559d467b36370df76e05f8aea6d56059a5-init/diff:/var/lib/docker/overlay2/97b14eb520192356c991915ce74cd6094e6ef948f398ac688b667ea18491ccde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/322b3b2341c11ccf675e89e43ed0a7559d467b36370df76e05f8aea6d56059a5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/322b3b2341c11ccf675e89e43ed0a7559d467b36370df76e05f8aea6d56059a5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/322b3b2341c11ccf675e89e43ed0a7559d467b36370df76e05f8aea6d56059a5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-910183",
	                "Source": "/var/lib/docker/volumes/addons-910183/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-910183",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-910183",
	                "name.minikube.sigs.k8s.io": "addons-910183",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cc1328149b5a4ccba07267d500413f46558fda05aa4ae3e99e0a5e3e2a475cd3",
	            "SandboxKey": "/var/run/docker/netns/cc1328149b5a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-910183": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "21eeb763554f867aba7e42d2ba4144701443b2899032ddbc06efb2b30eaf0e12",
	                    "EndpointID": "f23fd7c59cce9f005afafc96ef179728a18805d0c791862b96ae996be098102b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "86:fa:9c:a0:f3:18",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-910183",
	                        "be399e7c0cc7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
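
The cli_runner lines in each stderr block (docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183) read the SSH port straight out of the NetworkSettings.Ports map shown in this dump, where 22/tcp is bound to 127.0.0.1:32768. A small Go sketch that decodes the same field from plain `docker inspect` JSON — the struct is trimmed to the one field used here, not docker's full schema:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectDoc is a trimmed view of `docker inspect` output: only the port
// bindings consulted by the Go template above.
type inspectDoc struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func sshPort(container string) (string, error) {
	raw, err := exec.Command("docker", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var docs []inspectDoc
	if err := json.Unmarshal(raw, &docs); err != nil {
		return "", err
	}
	if len(docs) == 0 || len(docs[0].NetworkSettings.Ports["22/tcp"]) == 0 {
		return "", fmt.Errorf("no 22/tcp binding for %s", container)
	}
	return docs[0].NetworkSettings.Ports["22/tcp"][0].HostPort, nil // "32768" above
}

func main() {
	fmt.Println(sshPort("addons-910183"))
}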
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-910183 -n addons-910183
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-910183 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-910183 logs -n 25: (1.104475769s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-320689 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-320689   │ jenkins │ v1.37.0 │ 10 Jan 26 08:19 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Jan 26 08:19 UTC │ 10 Jan 26 08:19 UTC │
	│ delete  │ -p download-only-320689                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-320689   │ jenkins │ v1.37.0 │ 10 Jan 26 08:19 UTC │ 10 Jan 26 08:19 UTC │
	│ start   │ -o=json --download-only -p download-only-241766 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-241766   │ jenkins │ v1.37.0 │ 10 Jan 26 08:19 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 10 Jan 26 08:20 UTC │ 10 Jan 26 08:20 UTC │
	│ delete  │ -p download-only-241766                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-241766   │ jenkins │ v1.37.0 │ 10 Jan 26 08:20 UTC │ 10 Jan 26 08:20 UTC │
	│ delete  │ -p download-only-320689                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-320689   │ jenkins │ v1.37.0 │ 10 Jan 26 08:20 UTC │ 10 Jan 26 08:20 UTC │
	│ delete  │ -p download-only-241766                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-241766   │ jenkins │ v1.37.0 │ 10 Jan 26 08:20 UTC │ 10 Jan 26 08:20 UTC │
	│ start   │ --download-only -p download-docker-033678 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-033678 │ jenkins │ v1.37.0 │ 10 Jan 26 08:20 UTC │                     │
	│ delete  │ -p download-docker-033678                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-033678 │ jenkins │ v1.37.0 │ 10 Jan 26 08:20 UTC │ 10 Jan 26 08:20 UTC │
	│ start   │ --download-only -p binary-mirror-934346 --alsologtostderr --binary-mirror http://127.0.0.1:36511 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-934346   │ jenkins │ v1.37.0 │ 10 Jan 26 08:20 UTC │                     │
	│ delete  │ -p binary-mirror-934346                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-934346   │ jenkins │ v1.37.0 │ 10 Jan 26 08:20 UTC │ 10 Jan 26 08:20 UTC │
	│ addons  │ enable dashboard -p addons-910183                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-910183          │ jenkins │ v1.37.0 │ 10 Jan 26 08:20 UTC │                     │
	│ addons  │ disable dashboard -p addons-910183                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-910183          │ jenkins │ v1.37.0 │ 10 Jan 26 08:20 UTC │                     │
	│ start   │ -p addons-910183 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-910183          │ jenkins │ v1.37.0 │ 10 Jan 26 08:20 UTC │ 10 Jan 26 08:21 UTC │
	│ addons  │ addons-910183 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-910183          │ jenkins │ v1.37.0 │ 10 Jan 26 08:21 UTC │                     │
	│ addons  │ addons-910183 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-910183          │ jenkins │ v1.37.0 │ 10 Jan 26 08:21 UTC │                     │
	│ addons  │ enable headlamp -p addons-910183 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-910183          │ jenkins │ v1.37.0 │ 10 Jan 26 08:21 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
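
The table above is minikube's audit log: one row per CLI invocation in this job, with an empty END TIME for any command that had not recorded completion when the log was captured. On recent minikube versions the same table can be pulled from a live profile (a sketch; the grep window of 30 lines is arbitrary):

	minikube -p addons-910183 logs | grep -A 30 'Audit'
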
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:20:04
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:20:04.503100    8513 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:20:04.503287    8513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:20:04.503295    8513 out.go:374] Setting ErrFile to fd 2...
	I0110 08:20:04.503299    8513 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:20:04.503472    8513 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:20:04.504000    8513 out.go:368] Setting JSON to false
	I0110 08:20:04.504743    8513 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":156,"bootTime":1768033048,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:20:04.504788    8513 start.go:143] virtualization: kvm guest
	I0110 08:20:04.506637    8513 out.go:179] * [addons-910183] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 08:20:04.507874    8513 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:20:04.507887    8513 notify.go:221] Checking for updates...
	I0110 08:20:04.510162    8513 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:20:04.511515    8513 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:20:04.512581    8513 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:20:04.513554    8513 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 08:20:04.514617    8513 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:20:04.515828    8513 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:20:04.538088    8513 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:20:04.538282    8513 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:20:04.593240    8513 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2026-01-10 08:20:04.583837125 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:20:04.593340    8513 docker.go:319] overlay module found
	I0110 08:20:04.595674    8513 out.go:179] * Using the docker driver based on user configuration
	I0110 08:20:04.596772    8513 start.go:309] selected driver: docker
	I0110 08:20:04.596785    8513 start.go:928] validating driver "docker" against <nil>
	I0110 08:20:04.596795    8513 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:20:04.597306    8513 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:20:04.647840    8513 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:49 SystemTime:2026-01-10 08:20:04.638912921 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:20:04.648022    8513 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 08:20:04.648209    8513 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 08:20:04.649721    8513 out.go:179] * Using Docker driver with root privileges
	I0110 08:20:04.650680    8513 cni.go:84] Creating CNI manager for ""
	I0110 08:20:04.650776    8513 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:20:04.650789    8513 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 08:20:04.650842    8513 start.go:353] cluster config:
	{Name:addons-910183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-910183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:20:04.651970    8513 out.go:179] * Starting "addons-910183" primary control-plane node in "addons-910183" cluster
	I0110 08:20:04.652877    8513 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 08:20:04.654014    8513 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:20:04.655201    8513 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:20:04.655234    8513 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 08:20:04.655249    8513 cache.go:65] Caching tarball of preloaded images
	I0110 08:20:04.655319    8513 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:20:04.655334    8513 preload.go:251] Found /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 08:20:04.655345    8513 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 08:20:04.655695    8513 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/config.json ...
	I0110 08:20:04.655723    8513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/config.json: {Name:mk83f0638cc01e2237a3386b0e9f07803af2d5ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:20:04.671301    8513 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 to local cache
	I0110 08:20:04.671445    8513 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local cache directory
	I0110 08:20:04.671468    8513 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local cache directory, skipping pull
	I0110 08:20:04.671474    8513 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in cache, skipping pull
	I0110 08:20:04.671483    8513 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 as a tarball
	I0110 08:20:04.671490    8513 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 from local cache
	I0110 08:20:17.400377    8513 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 from cached tarball
	I0110 08:20:17.400412    8513 cache.go:243] Successfully downloaded all kic artifacts
	I0110 08:20:17.400448    8513 start.go:360] acquireMachinesLock for addons-910183: {Name:mke8c7f6d7c2f28c0fb3f97d9423613baa7f80f6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:20:17.400540    8513 start.go:364] duration metric: took 74.308µs to acquireMachinesLock for "addons-910183"
	I0110 08:20:17.400561    8513 start.go:93] Provisioning new machine with config: &{Name:addons-910183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-910183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 08:20:17.400626    8513 start.go:125] createHost starting for "" (driver="docker")
	I0110 08:20:17.402327    8513 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0110 08:20:17.402520    8513 start.go:159] libmachine.API.Create for "addons-910183" (driver="docker")
	I0110 08:20:17.402546    8513 client.go:173] LocalClient.Create starting
	I0110 08:20:17.402652    8513 main.go:144] libmachine: Creating CA: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem
	I0110 08:20:17.456901    8513 main.go:144] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem
	I0110 08:20:17.545270    8513 cli_runner.go:164] Run: docker network inspect addons-910183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 08:20:17.563109    8513 cli_runner.go:211] docker network inspect addons-910183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 08:20:17.563182    8513 network_create.go:284] running [docker network inspect addons-910183] to gather additional debugging logs...
	I0110 08:20:17.563199    8513 cli_runner.go:164] Run: docker network inspect addons-910183
	W0110 08:20:17.578496    8513 cli_runner.go:211] docker network inspect addons-910183 returned with exit code 1
	I0110 08:20:17.578520    8513 network_create.go:287] error running [docker network inspect addons-910183]: docker network inspect addons-910183: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-910183 not found
	I0110 08:20:17.578531    8513 network_create.go:289] output of [docker network inspect addons-910183]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-910183 not found
	
	** /stderr **
	I0110 08:20:17.578628    8513 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:20:17.595481    8513 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e1c6b0}
	I0110 08:20:17.595521    8513 network_create.go:124] attempt to create docker network addons-910183 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0110 08:20:17.595576    8513 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-910183 addons-910183
	I0110 08:20:17.640747    8513 network_create.go:108] docker network addons-910183 192.168.49.0/24 created
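
	minikube probed for the first free private /24 and created a dedicated bridge network for the profile, which is what lets it pin the deterministic node address 192.168.49.2 below. A quick way to confirm what was created (a sketch, assuming the profile still exists on the host):

		# Show the subnet/gateway of the profile network created above
		docker network inspect addons-910183 --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
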
	I0110 08:20:17.640780    8513 kic.go:121] calculated static IP "192.168.49.2" for the "addons-910183" container
	I0110 08:20:17.640830    8513 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 08:20:17.656987    8513 cli_runner.go:164] Run: docker volume create addons-910183 --label name.minikube.sigs.k8s.io=addons-910183 --label created_by.minikube.sigs.k8s.io=true
	I0110 08:20:17.674428    8513 oci.go:103] Successfully created a docker volume addons-910183
	I0110 08:20:17.674486    8513 cli_runner.go:164] Run: docker run --rm --name addons-910183-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-910183 --entrypoint /usr/bin/test -v addons-910183:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 08:20:24.063121    8513 cli_runner.go:217] Completed: docker run --rm --name addons-910183-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-910183 --entrypoint /usr/bin/test -v addons-910183:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib: (6.388604734s)
	I0110 08:20:24.063156    8513 oci.go:107] Successfully prepared a docker volume addons-910183
	I0110 08:20:24.063223    8513 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:20:24.063240    8513 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 08:20:24.063305    8513 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-910183:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 08:20:27.746131    8513 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-910183:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (3.682790524s)
	I0110 08:20:27.746158    8513 kic.go:203] duration metric: took 3.682915594s to extract preloaded images to volume ...
	W0110 08:20:27.746252    8513 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0110 08:20:27.746284    8513 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0110 08:20:27.746318    8513 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 08:20:27.799154    8513 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-910183 --name addons-910183 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-910183 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-910183 --network addons-910183 --ip 192.168.49.2 --volume addons-910183:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
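
	The "node" is this one privileged container: /var comes from the preloaded volume, the IP is pinned on the profile network, and the SSH/Docker/registry/API ports (22, 2376, 5000, 8443, 32443) are published on loopback with ephemeral host ports. The host port that maps to the node's sshd (32768 in this run) can be recovered the same way minikube does (sketch):

		# Which 127.0.0.1 port forwards to 22/tcp inside the node?
		docker port addons-910183 22
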
	I0110 08:20:28.085177    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Running}}
	I0110 08:20:28.104044    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:28.122102    8513 cli_runner.go:164] Run: docker exec addons-910183 stat /var/lib/dpkg/alternatives/iptables
	I0110 08:20:28.175958    8513 oci.go:144] the created container "addons-910183" has a running status.
	I0110 08:20:28.176008    8513 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa...
	I0110 08:20:28.231871    8513 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0110 08:20:28.257718    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:28.274914    8513 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0110 08:20:28.274934    8513 kic_runner.go:114] Args: [docker exec --privileged addons-910183 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0110 08:20:28.346004    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:28.371073    8513 machine.go:94] provisionDockerMachine start ...
	I0110 08:20:28.371170    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:28.394172    8513 main.go:144] libmachine: Using SSH client type: native
	I0110 08:20:28.394405    8513 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0110 08:20:28.394421    8513 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 08:20:28.529881    8513 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-910183
	
	I0110 08:20:28.529909    8513 ubuntu.go:182] provisioning hostname "addons-910183"
	I0110 08:20:28.529975    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:28.548039    8513 main.go:144] libmachine: Using SSH client type: native
	I0110 08:20:28.548338    8513 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0110 08:20:28.548364    8513 main.go:144] libmachine: About to run SSH command:
	sudo hostname addons-910183 && echo "addons-910183" | sudo tee /etc/hostname
	I0110 08:20:28.683488    8513 main.go:144] libmachine: SSH cmd err, output: <nil>: addons-910183
	
	I0110 08:20:28.683590    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:28.701064    8513 main.go:144] libmachine: Using SSH client type: native
	I0110 08:20:28.701269    8513 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0110 08:20:28.701283    8513 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-910183' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-910183/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-910183' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 08:20:28.829215    8513 main.go:144] libmachine: SSH cmd err, output: <nil>: 
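
	The hosts script above is idempotent: it rewrites /etc/hosts only when no entry for the new hostname exists, either replacing an existing 127.0.1.1 line or appending one. A quick check from the host (sketch):

		# The node should now resolve its own hostname locally
		docker exec addons-910183 grep addons-910183 /etc/hosts
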
	I0110 08:20:28.829246    8513 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-3641/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-3641/.minikube}
	I0110 08:20:28.829282    8513 ubuntu.go:190] setting up certificates
	I0110 08:20:28.829299    8513 provision.go:84] configureAuth start
	I0110 08:20:28.829343    8513 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-910183
	I0110 08:20:28.848369    8513 provision.go:143] copyHostCerts
	I0110 08:20:28.848434    8513 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem (1123 bytes)
	I0110 08:20:28.848547    8513 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem (1675 bytes)
	I0110 08:20:28.848622    8513 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem (1078 bytes)
	I0110 08:20:28.848718    8513 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem org=jenkins.addons-910183 san=[127.0.0.1 192.168.49.2 addons-910183 localhost minikube]
	I0110 08:20:28.971914    8513 provision.go:177] copyRemoteCerts
	I0110 08:20:28.972002    8513 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 08:20:28.972036    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:28.989503    8513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:20:29.081413    8513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0110 08:20:29.100129    8513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0110 08:20:29.116607    8513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 08:20:29.132398    8513 provision.go:87] duration metric: took 303.0888ms to configureAuth
	I0110 08:20:29.132420    8513 ubuntu.go:206] setting minikube options for container-runtime
	I0110 08:20:29.132587    8513 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:20:29.132677    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:29.150568    8513 main.go:144] libmachine: Using SSH client type: native
	I0110 08:20:29.150853    8513 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0110 08:20:29.150882    8513 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 08:20:29.417501    8513 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 08:20:29.417532    8513 machine.go:97] duration metric: took 1.046431202s to provisionDockerMachine
	I0110 08:20:29.417547    8513 client.go:176] duration metric: took 12.014992293s to LocalClient.Create
	I0110 08:20:29.417567    8513 start.go:167] duration metric: took 12.015047369s to libmachine.API.Create "addons-910183"
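
	For CRI-O, "setting minikube options for container-runtime" (above) amounts to writing /etc/sysconfig/crio.minikube with the service CIDR 10.96.0.0/12 as an insecure-registry range, then restarting crio so it takes effect. To verify inside the node (sketch):

		# minikube ssh runs a command in the node container for this profile
		minikube -p addons-910183 ssh "cat /etc/sysconfig/crio.minikube"
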
	I0110 08:20:29.417574    8513 start.go:293] postStartSetup for "addons-910183" (driver="docker")
	I0110 08:20:29.417583    8513 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 08:20:29.417636    8513 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 08:20:29.417668    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:29.436115    8513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:20:29.528611    8513 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 08:20:29.531965    8513 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 08:20:29.531995    8513 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 08:20:29.532008    8513 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-3641/.minikube/addons for local assets ...
	I0110 08:20:29.532063    8513 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-3641/.minikube/files for local assets ...
	I0110 08:20:29.532088    8513 start.go:296] duration metric: took 114.509151ms for postStartSetup
	I0110 08:20:29.532355    8513 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-910183
	I0110 08:20:29.549710    8513 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/config.json ...
	I0110 08:20:29.549993    8513 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:20:29.550041    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:29.566552    8513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:20:29.655589    8513 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 08:20:29.660055    8513 start.go:128] duration metric: took 12.2594176s to createHost
	I0110 08:20:29.660080    8513 start.go:83] releasing machines lock for "addons-910183", held for 12.259529256s
	I0110 08:20:29.660164    8513 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-910183
	I0110 08:20:29.677326    8513 ssh_runner.go:195] Run: cat /version.json
	I0110 08:20:29.677366    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:29.677390    8513 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 08:20:29.677462    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:29.695864    8513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:20:29.697118    8513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:20:29.836574    8513 ssh_runner.go:195] Run: systemctl --version
	I0110 08:20:29.842757    8513 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 08:20:29.875640    8513 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 08:20:29.880131    8513 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 08:20:29.880184    8513 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 08:20:29.904646    8513 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
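
	Because kindnet was recommended as the CNI earlier, any pre-installed bridge/podman CNI definitions are renamed with a .mk_disabled suffix rather than deleted, so CRI-O cannot pick them up but they remain recoverable. Expected state afterwards (sketch):

		# Only *.mk_disabled copies of the stock bridge configs should remain until kindnet writes its own
		docker exec addons-910183 ls /etc/cni/net.d
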
	I0110 08:20:29.904666    8513 start.go:496] detecting cgroup driver to use...
	I0110 08:20:29.904691    8513 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 08:20:29.904728    8513 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 08:20:29.919798    8513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 08:20:29.931587    8513 docker.go:218] disabling cri-docker service (if available) ...
	I0110 08:20:29.931632    8513 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 08:20:29.946528    8513 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 08:20:29.962365    8513 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 08:20:30.045591    8513 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 08:20:30.128912    8513 docker.go:234] disabling docker service ...
	I0110 08:20:30.128971    8513 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 08:20:30.146146    8513 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 08:20:30.158161    8513 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 08:20:30.237993    8513 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 08:20:30.315410    8513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 08:20:30.327015    8513 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 08:20:30.339869    8513 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 08:20:30.339913    8513 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:20:30.349440    8513 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 08:20:30.349496    8513 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:20:30.357580    8513 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:20:30.365416    8513 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:20:30.373386    8513 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 08:20:30.380702    8513 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:20:30.388899    8513 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:20:30.400999    8513 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
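
	The sed invocations above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to systemd (matching the driver detected on the host), put conmon into the pod cgroup, and allow pods to bind unprivileged low ports. The drop-in should end up containing roughly these keys (a sketch of the edited values only, not the full file):

		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "systemd"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]
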
	I0110 08:20:30.408884    8513 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 08:20:30.415522    8513 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0110 08:20:30.415557    8513 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0110 08:20:30.426411    8513 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
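
	The failed sysctl above is expected on a fresh node: /proc/sys/net/bridge/ only appears once the br_netfilter module is loaded, so minikube treats the probe as advisory, loads the module, and then enables IPv4 forwarding via /proc directly. The equivalent manual recovery (sketch):

		sudo modprobe br_netfilter
		sudo sysctl net.bridge.bridge-nf-call-iptables    # should resolve now
		sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
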
	I0110 08:20:30.433008    8513 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:20:30.507080    8513 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 08:20:30.631994    8513 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 08:20:30.632064    8513 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 08:20:30.636473    8513 start.go:574] Will wait 60s for crictl version
	I0110 08:20:30.636532    8513 ssh_runner.go:195] Run: which crictl
	I0110 08:20:30.639904    8513 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 08:20:30.662430    8513 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
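
	Plain crictl calls work here because the /etc/crictl.yaml written earlier points the client at CRI-O's socket; without that file the endpoint has to be passed explicitly (sketch):

		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
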
	I0110 08:20:30.662517    8513 ssh_runner.go:195] Run: crio --version
	I0110 08:20:30.686823    8513 ssh_runner.go:195] Run: crio --version
	I0110 08:20:30.714424    8513 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 08:20:30.716150    8513 cli_runner.go:164] Run: docker network inspect addons-910183 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:20:30.732652    8513 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0110 08:20:30.736597    8513 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 08:20:30.746173    8513 kubeadm.go:884] updating cluster {Name:addons-910183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-910183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 08:20:30.746312    8513 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:20:30.746373    8513 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 08:20:30.780403    8513 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 08:20:30.780422    8513 crio.go:433] Images already preloaded, skipping extraction
	I0110 08:20:30.780463    8513 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 08:20:30.804810    8513 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 08:20:30.804831    8513 cache_images.go:86] Images are preloaded, skipping loading
	I0110 08:20:30.804838    8513 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.35.0 crio true true} ...
	I0110 08:20:30.804942    8513 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-910183 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:addons-910183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
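
	This drop-in (installed below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf) clears any inherited ExecStart and relaunches kubelet with the node IP and hostname override pinned for this profile. To see the effective unit inside the node (sketch):

		# systemctl cat prints the base unit plus the 10-kubeadm.conf drop-in
		minikube -p addons-910183 ssh "systemctl cat kubelet"
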
	I0110 08:20:30.805029    8513 ssh_runner.go:195] Run: crio config
	I0110 08:20:30.848481    8513 cni.go:84] Creating CNI manager for ""
	I0110 08:20:30.848502    8513 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:20:30.848516    8513 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 08:20:30.848534    8513 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-910183 NodeName:addons-910183 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 08:20:30.848633    8513 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-910183"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 08:20:30.848685    8513 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 08:20:30.856586    8513 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 08:20:30.856645    8513 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 08:20:30.864063    8513 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0110 08:20:30.875808    8513 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 08:20:30.889934    8513 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
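The kubeadm.yaml.new staged above (2209 bytes) carries the full config printed earlier as one multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. As a minimal sketch, assuming a local copy of that file and the gopkg.in/yaml.v3 decoder, the documents can be split and inspected like this:

    package main

    import (
        "fmt"
        "io"
        "log"
        "os"

        "gopkg.in/yaml.v3" // assumption: any multi-document YAML decoder would do
    )

    func main() {
        f, err := os.Open("kubeadm.yaml") // hypothetical local copy of /var/tmp/minikube/kubeadm.yaml
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()

        // yaml.v3 decodes one document per Decode call and returns io.EOF at stream end.
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
        }
    }

Against the stream above this prints the four kinds with their API versions (kubeadm.k8s.io/v1beta4 twice, then the kubelet and kube-proxy config groups).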
	I0110 08:20:30.901503    8513 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0110 08:20:30.904704    8513 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
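The /etc/hosts one-liner above is an idempotent update: grep -v strips any stale control-plane.minikube.internal entry, the fresh mapping is appended, and the result is staged in a temp file before being copied over the original, so readers never see a half-written file. A minimal Go sketch of the same pattern (paths are illustrative; minikube actually runs this over SSH inside the node):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const host = "control-plane.minikube.internal"
        const entry = "192.168.49.2\t" + host

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }

        // Drop any existing line for the control-plane alias, then append the fresh one.
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)

        // Stage in the same directory so the final rename is atomic.
        tmp := "/etc/hosts.new"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            log.Fatal(err)
        }
        if err := os.Rename(tmp, "/etc/hosts"); err != nil {
            log.Fatal(err)
        }
    }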
	I0110 08:20:30.913747    8513 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:20:30.993294    8513 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:20:31.013911    8513 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183 for IP: 192.168.49.2
	I0110 08:20:31.013938    8513 certs.go:195] generating shared ca certs ...
	I0110 08:20:31.013959    8513 certs.go:227] acquiring lock for ca certs: {Name:mk00e261408d0e9fd9be39128613c5110a764de2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:20:31.014092    8513 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.key
	I0110 08:20:31.213541    8513 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt ...
	I0110 08:20:31.213575    8513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt: {Name:mkb5d80086d2a96be04ef96de7b749ae4194fba1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:20:31.213773    8513 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-3641/.minikube/ca.key ...
	I0110 08:20:31.213789    8513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/.minikube/ca.key: {Name:mk7cc69d76517149e2f4ca7509d435a446dd45b4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:20:31.213882    8513 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.key
	I0110 08:20:31.245779    8513 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.crt ...
	I0110 08:20:31.245810    8513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.crt: {Name:mk254c7c4dc63c6b6873c67796e0975cfc5b4966 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:20:31.245985    8513 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.key ...
	I0110 08:20:31.245998    8513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.key: {Name:mk22d1a22e7b7e15b03b85e9eedba02807c50cda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:20:31.246073    8513 certs.go:257] generating profile certs ...
	I0110 08:20:31.246131    8513 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.key
	I0110 08:20:31.246147    8513 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt with IP's: []
	I0110 08:20:31.393719    8513 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt ...
	I0110 08:20:31.393757    8513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: {Name:mk7452348dfdd9158ef308a2828d15db2fe325f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:20:31.393938    8513 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.key ...
	I0110 08:20:31.393952    8513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.key: {Name:mk4547d70f1f0290ed7cb9f3cf6d09bd6173ae51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:20:31.394034    8513 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/apiserver.key.58007d27
	I0110 08:20:31.394079    8513 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/apiserver.crt.58007d27 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0110 08:20:31.428727    8513 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/apiserver.crt.58007d27 ...
	I0110 08:20:31.428765    8513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/apiserver.crt.58007d27: {Name:mk1e4946d10cbc62380d6cec2be7dc395bf3a19c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:20:31.428928    8513 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/apiserver.key.58007d27 ...
	I0110 08:20:31.428943    8513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/apiserver.key.58007d27: {Name:mkc0c4209c5d55314675db8e1466eef015cee4a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:20:31.429011    8513 certs.go:382] copying /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/apiserver.crt.58007d27 -> /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/apiserver.crt
	I0110 08:20:31.429103    8513 certs.go:386] copying /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/apiserver.key.58007d27 -> /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/apiserver.key
	I0110 08:20:31.429158    8513 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/proxy-client.key
	I0110 08:20:31.429175    8513 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/proxy-client.crt with IP's: []
	I0110 08:20:31.541539    8513 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/proxy-client.crt ...
	I0110 08:20:31.541566    8513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/proxy-client.crt: {Name:mkfdc06f27eb537402c9b7a0156c436f07901cd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:20:31.541722    8513 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/proxy-client.key ...
	I0110 08:20:31.541742    8513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/proxy-client.key: {Name:mk4165d4bb3cd43f8881747c8991b21daefcafbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:20:31.541917    8513 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem (1675 bytes)
	I0110 08:20:31.541956    8513 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem (1078 bytes)
	I0110 08:20:31.541979    8513 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem (1123 bytes)
	I0110 08:20:31.542003    8513 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem (1675 bytes)
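The certs.go lines above build two local CAs (minikubeCA and proxyClientCA) and then sign per-profile leaf certs against them, e.g. the apiserver cert with IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]. A compact sketch of the CA half using only Go's standard crypto/x509, with the 26280h lifetime taken from the CertExpiration field in the cluster config above (illustrative; minikube's own cert helpers differ in detail):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }

        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }

        // Self-signed: the template doubles as its own parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }

        crt, err := os.Create("ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(crt, &pem.Block{Type: "CERTIFICATE", Bytes: der})
        crt.Close()

        keyOut, err := os.Create("ca.key")
        if err != nil {
            log.Fatal(err)
        }
        pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        keyOut.Close()
    }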
	I0110 08:20:31.542507    8513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 08:20:31.559602    8513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 08:20:31.575550    8513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 08:20:31.591136    8513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 08:20:31.606889    8513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0110 08:20:31.622643    8513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 08:20:31.639058    8513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 08:20:31.654463    8513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 08:20:31.669938    8513 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 08:20:31.687580    8513 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 08:20:31.698959    8513 ssh_runner.go:195] Run: openssl version
	I0110 08:20:31.704452    8513 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:20:31.710999    8513 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 08:20:31.719703    8513 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:20:31.723214    8513 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:20:31.723249    8513 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:20:31.756276    8513 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 08:20:31.763319    8513 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I0110 08:20:31.770162    8513 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 08:20:31.773616    8513 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0110 08:20:31.773660    8513 kubeadm.go:401] StartCluster: {Name:addons-910183 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:addons-910183 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:20:31.773744    8513 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:20:31.773807    8513 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:20:31.799274    8513 cri.go:96] found id: ""
	I0110 08:20:31.799331    8513 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 08:20:31.807413    8513 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0110 08:20:31.814542    8513 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I0110 08:20:31.814589    8513 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0110 08:20:31.821401    8513 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0110 08:20:31.821420    8513 kubeadm.go:158] found existing configuration files:
	
	I0110 08:20:31.821461    8513 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0110 08:20:31.828408    8513 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0110 08:20:31.828449    8513 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0110 08:20:31.834995    8513 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0110 08:20:31.841761    8513 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0110 08:20:31.841811    8513 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0110 08:20:31.848364    8513 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0110 08:20:31.855321    8513 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0110 08:20:31.855360    8513 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0110 08:20:31.862815    8513 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0110 08:20:31.871100    8513 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0110 08:20:31.871147    8513 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0110 08:20:31.878608    8513 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0110 08:20:31.913653    8513 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 08:20:31.913743    8513 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 08:20:31.971094    8513 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 08:20:31.971199    8513 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I0110 08:20:31.971254    8513 kubeadm.go:319] OS: Linux
	I0110 08:20:31.971325    8513 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 08:20:31.971393    8513 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 08:20:31.971441    8513 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 08:20:31.971481    8513 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 08:20:31.971519    8513 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 08:20:31.971564    8513 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 08:20:31.971604    8513 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 08:20:31.971645    8513 kubeadm.go:319] CGROUPS_IO: enabled
	I0110 08:20:32.023716    8513 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 08:20:32.023842    8513 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 08:20:32.023925    8513 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 08:20:32.030310    8513 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 08:20:32.033254    8513 out.go:252]   - Generating certificates and keys ...
	I0110 08:20:32.033357    8513 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 08:20:32.033456    8513 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 08:20:32.157677    8513 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 08:20:32.233824    8513 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 08:20:32.432156    8513 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 08:20:32.474128    8513 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 08:20:32.510430    8513 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 08:20:32.510550    8513 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-910183 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0110 08:20:32.642663    8513 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 08:20:32.642907    8513 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-910183 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0110 08:20:32.751956    8513 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 08:20:32.822575    8513 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 08:20:32.842749    8513 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 08:20:32.842864    8513 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 08:20:33.081815    8513 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 08:20:33.132260    8513 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 08:20:33.182948    8513 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 08:20:33.275158    8513 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 08:20:33.315895    8513 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 08:20:33.316265    8513 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 08:20:33.319893    8513 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 08:20:33.321366    8513 out.go:252]   - Booting up control plane ...
	I0110 08:20:33.321473    8513 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 08:20:33.321573    8513 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 08:20:33.322122    8513 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 08:20:33.350193    8513 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 08:20:33.350317    8513 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 08:20:33.356907    8513 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 08:20:33.357194    8513 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 08:20:33.357243    8513 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 08:20:33.461900    8513 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 08:20:33.462023    8513 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 08:20:33.963614    8513 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.863583ms
	I0110 08:20:33.966366    8513 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0110 08:20:33.966499    8513 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0110 08:20:33.966615    8513 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0110 08:20:33.966694    8513 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0110 08:20:34.471482    8513 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 504.990546ms
	I0110 08:20:35.310172    8513 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.34368469s
	I0110 08:20:36.968200    8513 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.001734223s
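The control-plane-check lines above show where kubeadm probes component health: the apiserver's /livez on the advertised address, and the controller-manager and scheduler health ports (10257 and 10259) on localhost. All three serve TLS with certs an ad-hoc probe will not trust, so a quick manual check has to skip verification. A small Go sketch of the same probes (endpoints copied from the log; the timeout is an assumption):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        endpoints := []string{
            "https://192.168.49.2:8443/livez", // kube-apiserver
            "https://127.0.0.1:10257/healthz", // kube-controller-manager
            "https://127.0.0.1:10259/livez",   // kube-scheduler
        }

        client := &http.Client{
            Timeout: 2 * time.Second,
            // Health-ping only: the components serve certs this probe won't trust.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }

        for _, url := range endpoints {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Printf("%-40s unreachable: %v\n", url, err)
                continue
            }
            fmt.Printf("%-40s %s\n", url, resp.Status)
            resp.Body.Close()
        }
    }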
	I0110 08:20:36.982774    8513 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0110 08:20:36.991260    8513 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0110 08:20:36.998562    8513 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0110 08:20:36.998800    8513 kubeadm.go:319] [mark-control-plane] Marking the node addons-910183 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0110 08:20:37.006373    8513 kubeadm.go:319] [bootstrap-token] Using token: 5aneij.q1307soikzhdwfm4
	I0110 08:20:37.007596    8513 out.go:252]   - Configuring RBAC rules ...
	I0110 08:20:37.007753    8513 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0110 08:20:37.010636    8513 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0110 08:20:37.015044    8513 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0110 08:20:37.017197    8513 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0110 08:20:37.019370    8513 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0110 08:20:37.022195    8513 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0110 08:20:37.372416    8513 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0110 08:20:37.788401    8513 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0110 08:20:38.373211    8513 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0110 08:20:38.373973    8513 kubeadm.go:319] 
	I0110 08:20:38.374066    8513 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0110 08:20:38.374075    8513 kubeadm.go:319] 
	I0110 08:20:38.374181    8513 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0110 08:20:38.374204    8513 kubeadm.go:319] 
	I0110 08:20:38.374244    8513 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0110 08:20:38.374308    8513 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0110 08:20:38.374383    8513 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0110 08:20:38.374393    8513 kubeadm.go:319] 
	I0110 08:20:38.374443    8513 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0110 08:20:38.374450    8513 kubeadm.go:319] 
	I0110 08:20:38.374490    8513 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0110 08:20:38.374496    8513 kubeadm.go:319] 
	I0110 08:20:38.374542    8513 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0110 08:20:38.374610    8513 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0110 08:20:38.374672    8513 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0110 08:20:38.374678    8513 kubeadm.go:319] 
	I0110 08:20:38.374776    8513 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0110 08:20:38.374845    8513 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0110 08:20:38.374850    8513 kubeadm.go:319] 
	I0110 08:20:38.374926    8513 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 5aneij.q1307soikzhdwfm4 \
	I0110 08:20:38.375047    8513 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f746eb27466bc6381c15f46a92d9a9e5cdeed2008acae9cc29658e7541168248 \
	I0110 08:20:38.375069    8513 kubeadm.go:319] 	--control-plane 
	I0110 08:20:38.375074    8513 kubeadm.go:319] 
	I0110 08:20:38.375193    8513 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0110 08:20:38.375204    8513 kubeadm.go:319] 
	I0110 08:20:38.375341    8513 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 5aneij.q1307soikzhdwfm4 \
	I0110 08:20:38.375486    8513 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f746eb27466bc6381c15f46a92d9a9e5cdeed2008acae9cc29658e7541168248 
	I0110 08:20:38.377081    8513 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I0110 08:20:38.377206    8513 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0110 08:20:38.377221    8513 cni.go:84] Creating CNI manager for ""
	I0110 08:20:38.377231    8513 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:20:38.379255    8513 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0110 08:20:38.380416    8513 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 08:20:38.384341    8513 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0110 08:20:38.384359    8513 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0110 08:20:38.396868    8513 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0110 08:20:38.592902    8513 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 08:20:38.592989    8513 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:20:38.592992    8513 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-910183 minikube.k8s.io/updated_at=2026_01_10T08_20_38_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee minikube.k8s.io/name=addons-910183 minikube.k8s.io/primary=true
	I0110 08:20:38.604926    8513 ops.go:34] apiserver oom_adj: -16
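The oom_adj probe above (the cat /proc/$(pgrep kube-apiserver)/oom_adj run at 08:20:38.592) confirms the apiserver landed at -16: negative values tell the kernel OOM killer to prefer other victims. A minimal Go version of the same check (assumes a kube-apiserver process on the local host; oom_adj is the legacy knob, with oom_score_adj as its modern sibling):

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        // Find the apiserver PID the same way the log does: pgrep kube-apiserver.
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            log.Fatal("kube-apiserver not running: ", err)
        }
        pid := strings.Fields(string(out))[0] // first matching PID

        adj, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", pid))
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("apiserver oom_adj: %s", adj)
    }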
	I0110 08:20:38.658179    8513 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:20:39.159223    8513 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:20:39.659284    8513 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:20:40.158243    8513 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:20:40.658478    8513 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:20:41.159013    8513 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:20:41.658276    8513 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:20:42.159025    8513 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:20:42.658487    8513 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:20:43.159014    8513 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:20:43.219255    8513 kubeadm.go:1114] duration metric: took 4.626320368s to wait for elevateKubeSystemPrivileges
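The repeated get sa default runs above are a plain poll loop: after creating the minikube-rbac cluster-admin binding for kube-system:default at 08:20:38.592, minikube retries on a roughly 500ms cadence until Kubernetes asynchronously materializes the namespace's default ServiceAccount, which per the duration metric took about 4.6s here. A minimal sketch of that wait (kubeconfig path taken from the log; the overall timeout is an assumption):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for {
            // Same probe the log repeats: does the default ServiceAccount exist yet?
            err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
                "get", "sa", "default").Run()
            if err == nil {
                fmt.Println("default ServiceAccount is ready")
                return
            }
            if time.Now().After(deadline) {
                log.Fatal("timed out waiting for default ServiceAccount")
            }
            time.Sleep(500 * time.Millisecond)
        }
    }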
	I0110 08:20:43.219290    8513 kubeadm.go:403] duration metric: took 11.445632426s to StartCluster
	I0110 08:20:43.219312    8513 settings.go:142] acquiring lock: {Name:mkbb32fc6441ceb31ce2923ea8999f8375298f08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:20:43.219431    8513 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:20:43.219783    8513 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/kubeconfig: {Name:mk0d29b3b0ee1fd71729aff31f901be50a2d0664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:20:43.219961    8513 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 08:20:43.219992    8513 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 08:20:43.220049    8513 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0110 08:20:43.220180    8513 addons.go:70] Setting default-storageclass=true in profile "addons-910183"
	I0110 08:20:43.220206    8513 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-910183"
	I0110 08:20:43.220216    8513 addons.go:70] Setting yakd=true in profile "addons-910183"
	I0110 08:20:43.220238    8513 addons.go:239] Setting addon yakd=true in "addons-910183"
	I0110 08:20:43.220258    8513 addons.go:70] Setting ingress=true in profile "addons-910183"
	I0110 08:20:43.220280    8513 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:20:43.220292    8513 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:20:43.220288    8513 addons.go:70] Setting ingress-dns=true in profile "addons-910183"
	I0110 08:20:43.220311    8513 addons.go:70] Setting cloud-spanner=true in profile "addons-910183"
	I0110 08:20:43.220323    8513 addons.go:239] Setting addon ingress-dns=true in "addons-910183"
	I0110 08:20:43.220326    8513 addons.go:239] Setting addon cloud-spanner=true in "addons-910183"
	I0110 08:20:43.220349    8513 addons.go:70] Setting registry-creds=true in profile "addons-910183"
	I0110 08:20:43.220361    8513 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:20:43.220373    8513 addons.go:70] Setting metrics-server=true in profile "addons-910183"
	I0110 08:20:43.220375    8513 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-910183"
	I0110 08:20:43.220389    8513 addons.go:239] Setting addon metrics-server=true in "addons-910183"
	I0110 08:20:43.220395    8513 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-910183"
	I0110 08:20:43.220403    8513 addons.go:70] Setting registry=true in profile "addons-910183"
	I0110 08:20:43.220410    8513 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:20:43.220410    8513 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-910183"
	I0110 08:20:43.220419    8513 addons.go:239] Setting addon registry=true in "addons-910183"
	I0110 08:20:43.220445    8513 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:20:43.220593    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:43.220755    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:43.220769    8513 addons.go:70] Setting volcano=true in profile "addons-910183"
	I0110 08:20:43.220782    8513 addons.go:239] Setting addon volcano=true in "addons-910183"
	I0110 08:20:43.220800    8513 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:20:43.220879    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:43.220886    8513 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-910183"
	I0110 08:20:43.220903    8513 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-910183"
	I0110 08:20:43.220900    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:43.220927    8513 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:20:43.220969    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:43.221262    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:43.221788    8513 addons.go:70] Setting storage-provisioner=true in profile "addons-910183"
	I0110 08:20:43.222048    8513 addons.go:239] Setting addon storage-provisioner=true in "addons-910183"
	I0110 08:20:43.222079    8513 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:20:43.222521    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:43.223141    8513 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-910183"
	I0110 08:20:43.223214    8513 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-910183"
	I0110 08:20:43.223240    8513 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:20:43.220361    8513 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:20:43.223562    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:43.223769    8513 out.go:179] * Verifying Kubernetes components...
	I0110 08:20:43.224171    8513 addons.go:70] Setting inspektor-gadget=true in profile "addons-910183"
	I0110 08:20:43.220366    8513 addons.go:239] Setting addon registry-creds=true in "addons-910183"
	I0110 08:20:43.224198    8513 addons.go:239] Setting addon inspektor-gadget=true in "addons-910183"
	I0110 08:20:43.224221    8513 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:20:43.224243    8513 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:20:43.220758    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:43.224674    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:43.220302    8513 addons.go:239] Setting addon ingress=true in "addons-910183"
	I0110 08:20:43.224701    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:43.224805    8513 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:20:43.224944    8513 addons.go:70] Setting volumesnapshots=true in profile "addons-910183"
	I0110 08:20:43.225705    8513 addons.go:239] Setting addon volumesnapshots=true in "addons-910183"
	I0110 08:20:43.220394    8513 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-910183"
	I0110 08:20:43.225444    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:43.225892    8513 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:20:43.226515    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:43.226933    8513 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:20:43.227504    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:43.225524    8513 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:20:43.220260    8513 addons.go:70] Setting gcp-auth=true in profile "addons-910183"
	I0110 08:20:43.228053    8513 mustload.go:66] Loading cluster: addons-910183
	I0110 08:20:43.234562    8513 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:20:43.234974    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:43.238019    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:43.238485    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:43.279193    8513 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0110 08:20:43.286776    8513 out.go:179]   - Using image docker.io/registry:3.0.0
	W0110 08:20:43.288889    8513 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0110 08:20:43.295133    8513 addons.go:239] Setting addon default-storageclass=true in "addons-910183"
	I0110 08:20:43.295178    8513 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:20:43.295651    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:43.296782    8513 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0110 08:20:43.296938    8513 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I0110 08:20:43.296957    8513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0110 08:20:43.297009    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:43.298073    8513 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0110 08:20:43.298095    8513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0110 08:20:43.298142    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:43.304148    8513 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.46
	I0110 08:20:43.305396    8513 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I0110 08:20:43.305415    8513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0110 08:20:43.305472    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:43.310618    8513 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0110 08:20:43.312504    8513 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0110 08:20:43.312528    8513 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0110 08:20:43.312613    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:43.320591    8513 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-910183"
	I0110 08:20:43.320761    8513 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:20:43.321012    8513 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I0110 08:20:43.321494    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:43.322128    8513 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0110 08:20:43.322145    8513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0110 08:20:43.322193    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:43.322327    8513 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 08:20:43.323454    8513 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 08:20:43.323472    8513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 08:20:43.323533    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:43.328288    8513 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0110 08:20:43.330578    8513 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0110 08:20:43.331863    8513 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0110 08:20:43.333016    8513 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I0110 08:20:43.335567    8513 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0110 08:20:43.335590    8513 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0110 08:20:43.336431    8513 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:20:43.338417    8513 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0110 08:20:43.338522    8513 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0110 08:20:43.338537    8513 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0110 08:20:43.338591    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:43.338852    8513 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I0110 08:20:43.340625    8513 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0110 08:20:43.341999    8513 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0110 08:20:43.342788    8513 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.48.0
	I0110 08:20:43.345754    8513 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0110 08:20:43.346445    8513 out.go:179]   - Using image ghcr.io/manusa/yakd:0.0.7
	I0110 08:20:43.347332    8513 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0110 08:20:43.347357    8513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0110 08:20:43.347412    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:43.348019    8513 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I0110 08:20:43.348104    8513 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0110 08:20:43.348121    8513 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0110 08:20:43.348205    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:43.348260    8513 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0110 08:20:43.348274    8513 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0110 08:20:43.348320    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:43.349451    8513 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0110 08:20:43.349482    8513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16257 bytes)
	I0110 08:20:43.349520    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:43.375288    8513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:20:43.376124    8513 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0110 08:20:43.376585    8513 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0110 08:20:43.378700    8513 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0110 08:20:43.378717    8513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0110 08:20:43.378815    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:43.379045    8513 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0110 08:20:43.379056    8513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0110 08:20:43.379109    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:43.383008    8513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:20:43.399011    8513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:20:43.408598    8513 out.go:179]   - Using image docker.io/busybox:stable
	I0110 08:20:43.409739    8513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:20:43.410925    8513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:20:43.411262    8513 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
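The sed pipeline above patches CoreDNS's Corefile before replacing the coredns ConfigMap: it injects a hosts stanza mapping host.minikube.internal to the host gateway (192.168.49.1) ahead of the forward . /etc/resolv.conf upstream, and enables the log plugin ahead of errors. A Go sketch of the same string surgery, assuming the stock Corefile layout:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        corefile := `.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa
    forward . /etc/resolv.conf
    cache 30
}`

        // Insert a hosts block ahead of the upstream forwarder, as the sed above does.
        hosts := "    hosts {\n       192.168.49.1 host.minikube.internal\n       fallthrough\n    }\n    forward . /etc/resolv.conf"
        patched := strings.Replace(corefile, "    forward . /etc/resolv.conf", hosts, 1)

        // And turn on query logging ahead of the errors plugin.
        patched = strings.Replace(patched, "    errors", "    log\n    errors", 1)

        fmt.Println(patched)
    }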
	I0110 08:20:43.412846    8513 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0110 08:20:43.414116    8513 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0110 08:20:43.414183    8513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0110 08:20:43.414280    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:43.420084    8513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:20:43.422219    8513 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 08:20:43.422237    8513 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 08:20:43.422293    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:43.422962    8513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:20:43.423817    8513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:20:43.423994    8513 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:20:43.426677    8513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:20:43.435127    8513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:20:43.436693    8513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:20:43.450659    8513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:20:43.454015    8513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:20:43.471619    8513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:20:43.479883    8513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:20:43.560039    8513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0110 08:20:43.580092    8513 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0110 08:20:43.580118    8513 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0110 08:20:43.593838    8513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0110 08:20:43.596409    8513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 08:20:43.603080    8513 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0110 08:20:43.603103    8513 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0110 08:20:43.612896    8513 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0110 08:20:43.612920    8513 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0110 08:20:43.618305    8513 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0110 08:20:43.618334    8513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0110 08:20:43.619822    8513 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I0110 08:20:43.619998    8513 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0110 08:20:43.620380    8513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0110 08:20:43.627650    8513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0110 08:20:43.628150    8513 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0110 08:20:43.628173    8513 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0110 08:20:43.630600    8513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0110 08:20:43.638533    8513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0110 08:20:43.648137    8513 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0110 08:20:43.648160    8513 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0110 08:20:43.649649    8513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I0110 08:20:43.656030    8513 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0110 08:20:43.656055    8513 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0110 08:20:43.657342    8513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 08:20:43.665493    8513 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0110 08:20:43.665516    8513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0110 08:20:43.674054    8513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0110 08:20:43.685668    8513 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0110 08:20:43.685704    8513 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0110 08:20:43.688446    8513 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0110 08:20:43.688473    8513 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0110 08:20:43.707484    8513 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0110 08:20:43.707514    8513 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0110 08:20:43.713228    8513 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0110 08:20:43.713252    8513 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0110 08:20:43.738604    8513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0110 08:20:43.753815    8513 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0110 08:20:43.753906    8513 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0110 08:20:43.754149    8513 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0110 08:20:43.754217    8513 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0110 08:20:43.754193    8513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0110 08:20:43.774561    8513 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0110 08:20:43.774582    8513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0110 08:20:43.808300    8513 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0110 08:20:43.808331    8513 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0110 08:20:43.812182    8513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0110 08:20:43.832337    8513 start.go:987] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0110 08:20:43.833387    8513 node_ready.go:35] waiting up to 6m0s for node "addons-910183" to be "Ready" ...
	I0110 08:20:43.840563    8513 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0110 08:20:43.840647    8513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2013 bytes)
	I0110 08:20:43.870950    8513 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0110 08:20:43.871042    8513 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0110 08:20:43.898834    8513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0110 08:20:43.961681    8513 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0110 08:20:43.961708    8513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0110 08:20:44.081763    8513 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0110 08:20:44.081789    8513 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0110 08:20:44.143255    8513 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0110 08:20:44.143277    8513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0110 08:20:44.200609    8513 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0110 08:20:44.200638    8513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0110 08:20:44.254350    8513 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0110 08:20:44.254392    8513 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0110 08:20:44.292477    8513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0110 08:20:44.337815    8513 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-910183" context rescaled to 1 replicas
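The rescale logged above trims the default two-replica coredns deployment to a single replica, which is enough on a one-node cluster. A hand-run equivalent (sketch, names taken from this log) would be:

    kubectl -n kube-system scale deployment coredns --replicas=1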
	I0110 08:20:44.992625    8513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.398743104s)
	I0110 08:20:44.992674    8513 addons.go:495] Verifying addon ingress=true in "addons-910183"
	I0110 08:20:44.992905    8513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.396438203s)
	I0110 08:20:44.992966    8513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.372561286s)
	I0110 08:20:44.993064    8513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.365389255s)
	I0110 08:20:44.993116    8513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.362490214s)
	I0110 08:20:44.993150    8513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.354591051s)
	I0110 08:20:44.993261    8513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.34336395s)
	I0110 08:20:44.993299    8513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.335936834s)
	I0110 08:20:44.993382    8513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.319305841s)
	I0110 08:20:44.993431    8513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.254794102s)
	I0110 08:20:44.993445    8513 addons.go:495] Verifying addon registry=true in "addons-910183"
	I0110 08:20:44.993503    8513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.239049508s)
	I0110 08:20:44.993690    8513 addons.go:495] Verifying addon metrics-server=true in "addons-910183"
	I0110 08:20:44.994351    8513 out.go:179] * Verifying ingress addon...
	I0110 08:20:44.995176    8513 out.go:179] * Verifying registry addon...
	I0110 08:20:44.997419    8513 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0110 08:20:44.998445    8513 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0110 08:20:45.006608    8513 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0110 08:20:45.006634    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:45.006751    8513 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0110 08:20:45.006831    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0110 08:20:45.010202    8513 out.go:285] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
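The "object has been modified" failure above is Kubernetes' optimistic-concurrency conflict: the StorageClass's resourceVersion changed between the addon's read and its write, most likely because the storageclass addon was annotating the default class at the same moment. Re-running the update normally succeeds; a manual equivalent would be the standard default-class patch (illustrative sketch, not taken from this log):

    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'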
	I0110 08:20:45.447723    8513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.63549283s)
	W0110 08:20:45.447783    8513 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0110 08:20:45.447818    8513 retry.go:84] will retry after 300ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
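The failure above is a CRD-establishment race: the VolumeSnapshotClass object in csi-hostpath-snapshotclass.yaml was submitted in the same apply that created its CRD, before the API server began serving snapshot.storage.k8s.io/v1 (hence "no matches for kind"). minikube retries below and eventually succeeds; a manual equivalent (sketch, using the file names from this log) is to apply the CRDs first, wait for them to be established, then apply the custom resources:

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml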
	I0110 08:20:45.447865    8513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.548930095s)
	I0110 08:20:45.448149    8513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.155627896s)
	I0110 08:20:45.448181    8513 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-910183"
	I0110 08:20:45.449992    8513 out.go:179] * Verifying csi-hostpath-driver addon...
	I0110 08:20:45.449989    8513 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-910183 service yakd-dashboard -n yakd-dashboard
	
	I0110 08:20:45.453833    8513 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0110 08:20:45.457602    8513 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0110 08:20:45.457621    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:45.500307    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:45.501275    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:45.754949    8513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W0110 08:20:45.836592    8513 node_ready.go:57] node "addons-910183" has "Ready":"False" status (will retry)
	I0110 08:20:45.957237    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:46.003239    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:46.003495    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:46.456794    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:46.500196    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:46.501343    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:46.957253    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:47.000429    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:47.001444    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:47.457403    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:47.500780    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:47.501004    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:47.956310    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:48.000945    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:48.001000    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:48.244099    8513 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.489101442s)
	W0110 08:20:48.336752    8513 node_ready.go:57] node "addons-910183" has "Ready":"False" status (will retry)
	I0110 08:20:48.456966    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:48.557205    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:48.557416    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:48.956820    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:49.000288    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:49.001303    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:49.456813    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:49.500220    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:49.501077    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:49.956963    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:50.000342    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:50.001312    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:50.457694    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:50.500094    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:50.501148    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0110 08:20:50.837214    8513 node_ready.go:57] node "addons-910183" has "Ready":"False" status (will retry)
	I0110 08:20:50.942821    8513 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0110 08:20:50.942883    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:50.956904    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:50.962716    8513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:20:51.000355    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:51.001156    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:51.060711    8513 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0110 08:20:51.073135    8513 addons.go:239] Setting addon gcp-auth=true in "addons-910183"
	I0110 08:20:51.073198    8513 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:20:51.073554    8513 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:20:51.091353    8513 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0110 08:20:51.091407    8513 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:20:51.108223    8513 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:20:51.199355    8513 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0110 08:20:51.200384    8513 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I0110 08:20:51.201612    8513 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0110 08:20:51.201628    8513 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0110 08:20:51.213888    8513 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0110 08:20:51.213906    8513 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0110 08:20:51.226700    8513 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0110 08:20:51.226719    8513 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0110 08:20:51.238703    8513 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0110 08:20:51.456817    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:51.500665    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:51.500717    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:51.533709    8513 addons.go:495] Verifying addon gcp-auth=true in "addons-910183"
	I0110 08:20:51.535027    8513 out.go:179] * Verifying gcp-auth addon...
	I0110 08:20:51.536921    8513 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0110 08:20:51.601351    8513 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0110 08:20:51.601370    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:20:51.956405    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:52.001031    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:52.001190    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:52.039225    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:20:52.457266    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:52.500829    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:52.501137    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:52.539856    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:20:52.958696    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:53.000224    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:53.001270    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:53.039449    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0110 08:20:53.336191    8513 node_ready.go:57] node "addons-910183" has "Ready":"False" status (will retry)
	I0110 08:20:53.456687    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:53.500213    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:53.501237    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:53.539436    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:20:53.956630    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:53.999927    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:54.000815    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:54.039845    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:20:54.457355    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:54.500511    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:54.500824    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:54.540065    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:20:54.957048    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:55.000618    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:55.001514    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:55.039997    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0110 08:20:55.336672    8513 node_ready.go:57] node "addons-910183" has "Ready":"False" status (will retry)
	I0110 08:20:55.457081    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:55.500574    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:55.501424    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:55.539849    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:20:55.957477    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:56.000724    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:56.000861    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:56.040280    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:20:56.336583    8513 node_ready.go:49] node "addons-910183" is "Ready"
	I0110 08:20:56.336618    8513 node_ready.go:38] duration metric: took 12.502920496s for node "addons-910183" to be "Ready" ...
	I0110 08:20:56.336638    8513 api_server.go:52] waiting for apiserver process to appear ...
	I0110 08:20:56.336796    8513 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 08:20:56.352257    8513 api_server.go:72] duration metric: took 13.132237805s to wait for apiserver process to appear ...
	I0110 08:20:56.352283    8513 api_server.go:88] waiting for apiserver healthz status ...
	I0110 08:20:56.352299    8513 api_server.go:299] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0110 08:20:56.356569    8513 api_server.go:325] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0110 08:20:56.357402    8513 api_server.go:141] control plane version: v1.35.0
	I0110 08:20:56.357427    8513 api_server.go:131] duration metric: took 5.138059ms to wait for apiserver health ...
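The readiness gate above amounts to two checks that can be reproduced by hand (node name and endpoint taken from this log; -k skips verification of the cluster's self-signed certificate, and anonymous access to the health endpoints is allowed by default RBAC):

    kubectl wait --for=condition=Ready node/addons-910183 --timeout=360s
    curl -k https://192.168.49.2:8443/healthz    # expect: ok

On current Kubernetes versions /livez and /readyz are the preferred probe endpoints; /healthz is retained for compatibility, which is why it still returns 200 here.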
	I0110 08:20:56.357434    8513 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 08:20:56.360408    8513 system_pods.go:59] 20 kube-system pods found
	I0110 08:20:56.360433    8513 system_pods.go:61] "amd-gpu-device-plugin-8zrhw" [3e3fd217-ef7c-4343-8d3f-a9f9e21b8dcc] Pending
	I0110 08:20:56.360441    8513 system_pods.go:61] "coredns-7d764666f9-qcg8s" [ab0f9d61-b753-4c7d-b89d-cb433d88af42] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 08:20:56.360448    8513 system_pods.go:61] "csi-hostpath-attacher-0" [64530e45-d14b-4ca5-a7b4-981b3788db1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0110 08:20:56.360452    8513 system_pods.go:61] "csi-hostpath-resizer-0" [233a8188-a0ab-4281-ba5a-19985a69707d] Pending
	I0110 08:20:56.360458    8513 system_pods.go:61] "csi-hostpathplugin-pmk7s" [5518d18c-2833-4218-896d-c211a98be032] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0110 08:20:56.360462    8513 system_pods.go:61] "etcd-addons-910183" [765064e9-8f42-42af-9f3f-c62cde25f958] Running
	I0110 08:20:56.360465    8513 system_pods.go:61] "kindnet-nz7j6" [275ea9b2-1ddd-4a09-8b7e-32b1bdc84a60] Running
	I0110 08:20:56.360468    8513 system_pods.go:61] "kube-apiserver-addons-910183" [64927929-5cc6-4c1b-a3df-e9e2d0849bde] Running
	I0110 08:20:56.360471    8513 system_pods.go:61] "kube-controller-manager-addons-910183" [72e1f040-3937-433c-9cf0-15a81f836f57] Running
	I0110 08:20:56.360477    8513 system_pods.go:61] "kube-ingress-dns-minikube" [7e62e8f8-f9a8-40c7-9fe4-99a24cd43a7f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0110 08:20:56.360480    8513 system_pods.go:61] "kube-proxy-zww5l" [81d50269-fb7d-43ee-81b7-ebe5a0ebaedd] Running
	I0110 08:20:56.360484    8513 system_pods.go:61] "kube-scheduler-addons-910183" [4fc6d8ff-70ca-4276-bdec-9d150f2edaf0] Running
	I0110 08:20:56.360489    8513 system_pods.go:61] "metrics-server-5778bb4788-228dp" [15f7e27d-e734-419e-bc25-0a33689452ac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0110 08:20:56.360499    8513 system_pods.go:61] "nvidia-device-plugin-daemonset-cr698" [35c26c44-934c-4ac5-af77-fe7d9272e2d9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0110 08:20:56.360506    8513 system_pods.go:61] "registry-788cd7d5bc-49v9n" [7bef0fc4-c5cd-407e-b2f7-9ee69fbf6b75] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0110 08:20:56.360514    8513 system_pods.go:61] "registry-creds-567fb78d95-6qnq9" [03397d64-2905-4e30-b560-552bbbed8823] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0110 08:20:56.360517    8513 system_pods.go:61] "registry-proxy-zvnnr" [c3d911de-bdc5-41e1-ac2d-29832823bf99] Pending
	I0110 08:20:56.360521    8513 system_pods.go:61] "snapshot-controller-6588d87457-fgv47" [66725761-2269-4e3e-9dd0-a4c457b4dd17] Pending
	I0110 08:20:56.360526    8513 system_pods.go:61] "snapshot-controller-6588d87457-vgbtz" [9ab3a4d0-c0fe-40a5-a3bf-4ac0873b6555] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 08:20:56.360530    8513 system_pods.go:61] "storage-provisioner" [5f492e9d-591d-4826-bdde-390c6db3c489] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 08:20:56.360535    8513 system_pods.go:74] duration metric: took 3.096456ms to wait for pod list to return data ...
	I0110 08:20:56.360544    8513 default_sa.go:34] waiting for default service account to be created ...
	I0110 08:20:56.362399    8513 default_sa.go:45] found service account: "default"
	I0110 08:20:56.362417    8513 default_sa.go:55] duration metric: took 1.867673ms for default service account to be created ...
	I0110 08:20:56.362425    8513 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 08:20:56.365545    8513 system_pods.go:86] 20 kube-system pods found
	I0110 08:20:56.365568    8513 system_pods.go:89] "amd-gpu-device-plugin-8zrhw" [3e3fd217-ef7c-4343-8d3f-a9f9e21b8dcc] Pending
	I0110 08:20:56.365577    8513 system_pods.go:89] "coredns-7d764666f9-qcg8s" [ab0f9d61-b753-4c7d-b89d-cb433d88af42] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 08:20:56.365587    8513 system_pods.go:89] "csi-hostpath-attacher-0" [64530e45-d14b-4ca5-a7b4-981b3788db1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0110 08:20:56.365593    8513 system_pods.go:89] "csi-hostpath-resizer-0" [233a8188-a0ab-4281-ba5a-19985a69707d] Pending
	I0110 08:20:56.365603    8513 system_pods.go:89] "csi-hostpathplugin-pmk7s" [5518d18c-2833-4218-896d-c211a98be032] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0110 08:20:56.365613    8513 system_pods.go:89] "etcd-addons-910183" [765064e9-8f42-42af-9f3f-c62cde25f958] Running
	I0110 08:20:56.365619    8513 system_pods.go:89] "kindnet-nz7j6" [275ea9b2-1ddd-4a09-8b7e-32b1bdc84a60] Running
	I0110 08:20:56.365627    8513 system_pods.go:89] "kube-apiserver-addons-910183" [64927929-5cc6-4c1b-a3df-e9e2d0849bde] Running
	I0110 08:20:56.365633    8513 system_pods.go:89] "kube-controller-manager-addons-910183" [72e1f040-3937-433c-9cf0-15a81f836f57] Running
	I0110 08:20:56.365647    8513 system_pods.go:89] "kube-ingress-dns-minikube" [7e62e8f8-f9a8-40c7-9fe4-99a24cd43a7f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0110 08:20:56.365656    8513 system_pods.go:89] "kube-proxy-zww5l" [81d50269-fb7d-43ee-81b7-ebe5a0ebaedd] Running
	I0110 08:20:56.365663    8513 system_pods.go:89] "kube-scheduler-addons-910183" [4fc6d8ff-70ca-4276-bdec-9d150f2edaf0] Running
	I0110 08:20:56.365673    8513 system_pods.go:89] "metrics-server-5778bb4788-228dp" [15f7e27d-e734-419e-bc25-0a33689452ac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0110 08:20:56.365687    8513 system_pods.go:89] "nvidia-device-plugin-daemonset-cr698" [35c26c44-934c-4ac5-af77-fe7d9272e2d9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0110 08:20:56.365749    8513 system_pods.go:89] "registry-788cd7d5bc-49v9n" [7bef0fc4-c5cd-407e-b2f7-9ee69fbf6b75] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0110 08:20:56.365764    8513 system_pods.go:89] "registry-creds-567fb78d95-6qnq9" [03397d64-2905-4e30-b560-552bbbed8823] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0110 08:20:56.365770    8513 system_pods.go:89] "registry-proxy-zvnnr" [c3d911de-bdc5-41e1-ac2d-29832823bf99] Pending
	I0110 08:20:56.365780    8513 system_pods.go:89] "snapshot-controller-6588d87457-fgv47" [66725761-2269-4e3e-9dd0-a4c457b4dd17] Pending
	I0110 08:20:56.365788    8513 system_pods.go:89] "snapshot-controller-6588d87457-vgbtz" [9ab3a4d0-c0fe-40a5-a3bf-4ac0873b6555] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 08:20:56.365795    8513 system_pods.go:89] "storage-provisioner" [5f492e9d-591d-4826-bdde-390c6db3c489] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 08:20:56.365817    8513 retry.go:84] will retry after 300ms: missing components: kube-dns
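The "missing components: kube-dns" retry above keys off the k8s-app=kube-dns label that kubeadm places on the CoreDNS pods, which are still Pending at this point. A quick manual check of the same condition (illustrative sketch):

    kubectl -n kube-system get pods -l k8s-app=kube-dns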
	I0110 08:20:56.465423    8513 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0110 08:20:56.465448    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:56.500819    8513 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0110 08:20:56.500837    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:56.500859    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:56.566227    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:20:56.669109    8513 system_pods.go:86] 20 kube-system pods found
	I0110 08:20:56.669149    8513 system_pods.go:89] "amd-gpu-device-plugin-8zrhw" [3e3fd217-ef7c-4343-8d3f-a9f9e21b8dcc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0110 08:20:56.669162    8513 system_pods.go:89] "coredns-7d764666f9-qcg8s" [ab0f9d61-b753-4c7d-b89d-cb433d88af42] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 08:20:56.669178    8513 system_pods.go:89] "csi-hostpath-attacher-0" [64530e45-d14b-4ca5-a7b4-981b3788db1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0110 08:20:56.669190    8513 system_pods.go:89] "csi-hostpath-resizer-0" [233a8188-a0ab-4281-ba5a-19985a69707d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0110 08:20:56.669222    8513 system_pods.go:89] "csi-hostpathplugin-pmk7s" [5518d18c-2833-4218-896d-c211a98be032] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0110 08:20:56.669232    8513 system_pods.go:89] "etcd-addons-910183" [765064e9-8f42-42af-9f3f-c62cde25f958] Running
	I0110 08:20:56.669244    8513 system_pods.go:89] "kindnet-nz7j6" [275ea9b2-1ddd-4a09-8b7e-32b1bdc84a60] Running
	I0110 08:20:56.669252    8513 system_pods.go:89] "kube-apiserver-addons-910183" [64927929-5cc6-4c1b-a3df-e9e2d0849bde] Running
	I0110 08:20:56.669258    8513 system_pods.go:89] "kube-controller-manager-addons-910183" [72e1f040-3937-433c-9cf0-15a81f836f57] Running
	I0110 08:20:56.669269    8513 system_pods.go:89] "kube-ingress-dns-minikube" [7e62e8f8-f9a8-40c7-9fe4-99a24cd43a7f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0110 08:20:56.669273    8513 system_pods.go:89] "kube-proxy-zww5l" [81d50269-fb7d-43ee-81b7-ebe5a0ebaedd] Running
	I0110 08:20:56.669279    8513 system_pods.go:89] "kube-scheduler-addons-910183" [4fc6d8ff-70ca-4276-bdec-9d150f2edaf0] Running
	I0110 08:20:56.669292    8513 system_pods.go:89] "metrics-server-5778bb4788-228dp" [15f7e27d-e734-419e-bc25-0a33689452ac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0110 08:20:56.669300    8513 system_pods.go:89] "nvidia-device-plugin-daemonset-cr698" [35c26c44-934c-4ac5-af77-fe7d9272e2d9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0110 08:20:56.669313    8513 system_pods.go:89] "registry-788cd7d5bc-49v9n" [7bef0fc4-c5cd-407e-b2f7-9ee69fbf6b75] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0110 08:20:56.669325    8513 system_pods.go:89] "registry-creds-567fb78d95-6qnq9" [03397d64-2905-4e30-b560-552bbbed8823] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0110 08:20:56.669332    8513 system_pods.go:89] "registry-proxy-zvnnr" [c3d911de-bdc5-41e1-ac2d-29832823bf99] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0110 08:20:56.669344    8513 system_pods.go:89] "snapshot-controller-6588d87457-fgv47" [66725761-2269-4e3e-9dd0-a4c457b4dd17] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 08:20:56.669355    8513 system_pods.go:89] "snapshot-controller-6588d87457-vgbtz" [9ab3a4d0-c0fe-40a5-a3bf-4ac0873b6555] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 08:20:56.669362    8513 system_pods.go:89] "storage-provisioner" [5f492e9d-591d-4826-bdde-390c6db3c489] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 08:20:56.948408    8513 system_pods.go:86] 20 kube-system pods found
	I0110 08:20:56.948449    8513 system_pods.go:89] "amd-gpu-device-plugin-8zrhw" [3e3fd217-ef7c-4343-8d3f-a9f9e21b8dcc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0110 08:20:56.948461    8513 system_pods.go:89] "coredns-7d764666f9-qcg8s" [ab0f9d61-b753-4c7d-b89d-cb433d88af42] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 08:20:56.948470    8513 system_pods.go:89] "csi-hostpath-attacher-0" [64530e45-d14b-4ca5-a7b4-981b3788db1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0110 08:20:56.948479    8513 system_pods.go:89] "csi-hostpath-resizer-0" [233a8188-a0ab-4281-ba5a-19985a69707d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0110 08:20:56.948489    8513 system_pods.go:89] "csi-hostpathplugin-pmk7s" [5518d18c-2833-4218-896d-c211a98be032] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0110 08:20:56.948498    8513 system_pods.go:89] "etcd-addons-910183" [765064e9-8f42-42af-9f3f-c62cde25f958] Running
	I0110 08:20:56.948504    8513 system_pods.go:89] "kindnet-nz7j6" [275ea9b2-1ddd-4a09-8b7e-32b1bdc84a60] Running
	I0110 08:20:56.948512    8513 system_pods.go:89] "kube-apiserver-addons-910183" [64927929-5cc6-4c1b-a3df-e9e2d0849bde] Running
	I0110 08:20:56.948518    8513 system_pods.go:89] "kube-controller-manager-addons-910183" [72e1f040-3937-433c-9cf0-15a81f836f57] Running
	I0110 08:20:56.948529    8513 system_pods.go:89] "kube-ingress-dns-minikube" [7e62e8f8-f9a8-40c7-9fe4-99a24cd43a7f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0110 08:20:56.948538    8513 system_pods.go:89] "kube-proxy-zww5l" [81d50269-fb7d-43ee-81b7-ebe5a0ebaedd] Running
	I0110 08:20:56.948544    8513 system_pods.go:89] "kube-scheduler-addons-910183" [4fc6d8ff-70ca-4276-bdec-9d150f2edaf0] Running
	I0110 08:20:56.948555    8513 system_pods.go:89] "metrics-server-5778bb4788-228dp" [15f7e27d-e734-419e-bc25-0a33689452ac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0110 08:20:56.948568    8513 system_pods.go:89] "nvidia-device-plugin-daemonset-cr698" [35c26c44-934c-4ac5-af77-fe7d9272e2d9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0110 08:20:56.948576    8513 system_pods.go:89] "registry-788cd7d5bc-49v9n" [7bef0fc4-c5cd-407e-b2f7-9ee69fbf6b75] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0110 08:20:56.948584    8513 system_pods.go:89] "registry-creds-567fb78d95-6qnq9" [03397d64-2905-4e30-b560-552bbbed8823] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0110 08:20:56.948593    8513 system_pods.go:89] "registry-proxy-zvnnr" [c3d911de-bdc5-41e1-ac2d-29832823bf99] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0110 08:20:56.948601    8513 system_pods.go:89] "snapshot-controller-6588d87457-fgv47" [66725761-2269-4e3e-9dd0-a4c457b4dd17] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 08:20:56.948609    8513 system_pods.go:89] "snapshot-controller-6588d87457-vgbtz" [9ab3a4d0-c0fe-40a5-a3bf-4ac0873b6555] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 08:20:56.948618    8513 system_pods.go:89] "storage-provisioner" [5f492e9d-591d-4826-bdde-390c6db3c489] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 08:20:57.046540    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:20:57.046664    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:57.047103    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:57.047335    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:57.271197    8513 system_pods.go:86] 20 kube-system pods found
	I0110 08:20:57.271278    8513 system_pods.go:89] "amd-gpu-device-plugin-8zrhw" [3e3fd217-ef7c-4343-8d3f-a9f9e21b8dcc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0110 08:20:57.271292    8513 system_pods.go:89] "coredns-7d764666f9-qcg8s" [ab0f9d61-b753-4c7d-b89d-cb433d88af42] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 08:20:57.271303    8513 system_pods.go:89] "csi-hostpath-attacher-0" [64530e45-d14b-4ca5-a7b4-981b3788db1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0110 08:20:57.271315    8513 system_pods.go:89] "csi-hostpath-resizer-0" [233a8188-a0ab-4281-ba5a-19985a69707d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0110 08:20:57.271328    8513 system_pods.go:89] "csi-hostpathplugin-pmk7s" [5518d18c-2833-4218-896d-c211a98be032] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0110 08:20:57.271334    8513 system_pods.go:89] "etcd-addons-910183" [765064e9-8f42-42af-9f3f-c62cde25f958] Running
	I0110 08:20:57.271341    8513 system_pods.go:89] "kindnet-nz7j6" [275ea9b2-1ddd-4a09-8b7e-32b1bdc84a60] Running
	I0110 08:20:57.271364    8513 system_pods.go:89] "kube-apiserver-addons-910183" [64927929-5cc6-4c1b-a3df-e9e2d0849bde] Running
	I0110 08:20:57.271370    8513 system_pods.go:89] "kube-controller-manager-addons-910183" [72e1f040-3937-433c-9cf0-15a81f836f57] Running
	I0110 08:20:57.271380    8513 system_pods.go:89] "kube-ingress-dns-minikube" [7e62e8f8-f9a8-40c7-9fe4-99a24cd43a7f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0110 08:20:57.271394    8513 system_pods.go:89] "kube-proxy-zww5l" [81d50269-fb7d-43ee-81b7-ebe5a0ebaedd] Running
	I0110 08:20:57.271400    8513 system_pods.go:89] "kube-scheduler-addons-910183" [4fc6d8ff-70ca-4276-bdec-9d150f2edaf0] Running
	I0110 08:20:57.271412    8513 system_pods.go:89] "metrics-server-5778bb4788-228dp" [15f7e27d-e734-419e-bc25-0a33689452ac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0110 08:20:57.271420    8513 system_pods.go:89] "nvidia-device-plugin-daemonset-cr698" [35c26c44-934c-4ac5-af77-fe7d9272e2d9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0110 08:20:57.271429    8513 system_pods.go:89] "registry-788cd7d5bc-49v9n" [7bef0fc4-c5cd-407e-b2f7-9ee69fbf6b75] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0110 08:20:57.271437    8513 system_pods.go:89] "registry-creds-567fb78d95-6qnq9" [03397d64-2905-4e30-b560-552bbbed8823] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0110 08:20:57.271449    8513 system_pods.go:89] "registry-proxy-zvnnr" [c3d911de-bdc5-41e1-ac2d-29832823bf99] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0110 08:20:57.271457    8513 system_pods.go:89] "snapshot-controller-6588d87457-fgv47" [66725761-2269-4e3e-9dd0-a4c457b4dd17] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 08:20:57.271471    8513 system_pods.go:89] "snapshot-controller-6588d87457-vgbtz" [9ab3a4d0-c0fe-40a5-a3bf-4ac0873b6555] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 08:20:57.271478    8513 system_pods.go:89] "storage-provisioner" [5f492e9d-591d-4826-bdde-390c6db3c489] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 08:20:57.457839    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:57.501645    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:57.501714    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:57.541120    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:20:57.715821    8513 system_pods.go:86] 20 kube-system pods found
	I0110 08:20:57.715859    8513 system_pods.go:89] "amd-gpu-device-plugin-8zrhw" [3e3fd217-ef7c-4343-8d3f-a9f9e21b8dcc] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0110 08:20:57.715867    8513 system_pods.go:89] "coredns-7d764666f9-qcg8s" [ab0f9d61-b753-4c7d-b89d-cb433d88af42] Running
	I0110 08:20:57.715878    8513 system_pods.go:89] "csi-hostpath-attacher-0" [64530e45-d14b-4ca5-a7b4-981b3788db1e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0110 08:20:57.715887    8513 system_pods.go:89] "csi-hostpath-resizer-0" [233a8188-a0ab-4281-ba5a-19985a69707d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0110 08:20:57.715898    8513 system_pods.go:89] "csi-hostpathplugin-pmk7s" [5518d18c-2833-4218-896d-c211a98be032] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0110 08:20:57.715904    8513 system_pods.go:89] "etcd-addons-910183" [765064e9-8f42-42af-9f3f-c62cde25f958] Running
	I0110 08:20:57.715911    8513 system_pods.go:89] "kindnet-nz7j6" [275ea9b2-1ddd-4a09-8b7e-32b1bdc84a60] Running
	I0110 08:20:57.715921    8513 system_pods.go:89] "kube-apiserver-addons-910183" [64927929-5cc6-4c1b-a3df-e9e2d0849bde] Running
	I0110 08:20:57.715927    8513 system_pods.go:89] "kube-controller-manager-addons-910183" [72e1f040-3937-433c-9cf0-15a81f836f57] Running
	I0110 08:20:57.715940    8513 system_pods.go:89] "kube-ingress-dns-minikube" [7e62e8f8-f9a8-40c7-9fe4-99a24cd43a7f] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0110 08:20:57.715947    8513 system_pods.go:89] "kube-proxy-zww5l" [81d50269-fb7d-43ee-81b7-ebe5a0ebaedd] Running
	I0110 08:20:57.715968    8513 system_pods.go:89] "kube-scheduler-addons-910183" [4fc6d8ff-70ca-4276-bdec-9d150f2edaf0] Running
	I0110 08:20:57.715978    8513 system_pods.go:89] "metrics-server-5778bb4788-228dp" [15f7e27d-e734-419e-bc25-0a33689452ac] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0110 08:20:57.715986    8513 system_pods.go:89] "nvidia-device-plugin-daemonset-cr698" [35c26c44-934c-4ac5-af77-fe7d9272e2d9] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0110 08:20:57.715993    8513 system_pods.go:89] "registry-788cd7d5bc-49v9n" [7bef0fc4-c5cd-407e-b2f7-9ee69fbf6b75] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0110 08:20:57.716007    8513 system_pods.go:89] "registry-creds-567fb78d95-6qnq9" [03397d64-2905-4e30-b560-552bbbed8823] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0110 08:20:57.716012    8513 system_pods.go:89] "registry-proxy-zvnnr" [c3d911de-bdc5-41e1-ac2d-29832823bf99] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0110 08:20:57.716020    8513 system_pods.go:89] "snapshot-controller-6588d87457-fgv47" [66725761-2269-4e3e-9dd0-a4c457b4dd17] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 08:20:57.716029    8513 system_pods.go:89] "snapshot-controller-6588d87457-vgbtz" [9ab3a4d0-c0fe-40a5-a3bf-4ac0873b6555] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0110 08:20:57.716034    8513 system_pods.go:89] "storage-provisioner" [5f492e9d-591d-4826-bdde-390c6db3c489] Running
	I0110 08:20:57.716044    8513 system_pods.go:126] duration metric: took 1.353612695s to wait for k8s-apps to be running ...
	I0110 08:20:57.716053    8513 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 08:20:57.716111    8513 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:20:57.733568    8513 system_svc.go:56] duration metric: took 17.505985ms WaitForService to wait for kubelet
	I0110 08:20:57.733599    8513 kubeadm.go:587] duration metric: took 14.51358177s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 08:20:57.733621    8513 node_conditions.go:102] verifying NodePressure condition ...
	I0110 08:20:57.736560    8513 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 08:20:57.736590    8513 node_conditions.go:123] node cpu capacity is 8
	I0110 08:20:57.736607    8513 node_conditions.go:105] duration metric: took 2.979945ms to run NodePressure ...
	I0110 08:20:57.736620    8513 start.go:242] waiting for startup goroutines ...
	I0110 08:20:57.957512    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:58.058262    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:58.058910    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:20:58.058935    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:58.457757    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:58.500667    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:58.500997    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:58.540220    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:20:58.958044    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:59.000819    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:59.001417    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:59.040111    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:20:59.457649    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:20:59.500851    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:20:59.501618    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:20:59.540415    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:20:59.957652    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:00.009698    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:00.009784    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:00.112634    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:00.457200    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:00.501184    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:00.501577    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:00.540282    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:00.957782    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:01.000382    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:01.001078    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:01.039221    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:01.457519    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:01.501584    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:01.501592    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:01.540173    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:01.957805    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:02.000607    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:02.001351    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:02.039620    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:02.456812    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:02.500363    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:02.501142    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:02.539552    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:02.958538    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:03.002855    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:03.003087    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:03.041054    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:03.457359    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:03.501361    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:03.501472    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:03.539847    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:03.957292    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:04.001052    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:04.001697    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:04.102085    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:04.457139    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:04.500414    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:04.501230    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:04.539611    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:04.958579    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:05.000713    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:05.001634    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:05.040200    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:05.457828    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:05.500821    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:05.501539    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:05.540199    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:05.957406    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:06.001360    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:06.001600    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:06.040326    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:06.457209    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:06.501420    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:06.501837    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:06.540499    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:06.970298    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:07.014391    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:07.014506    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:07.151863    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:07.457633    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:07.501374    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:07.501380    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:07.540212    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:07.957938    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:08.000715    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:08.001471    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:08.039828    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:08.457852    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:08.500577    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:08.501199    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:08.539395    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:08.957798    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:09.058458    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:09.058511    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:09.058572    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:09.457715    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:09.500128    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:09.501061    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:09.539949    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:10.093337    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:10.094242    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:10.094375    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:10.094587    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:10.457825    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:10.500145    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:10.501042    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:10.540265    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:10.957993    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:11.000818    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:11.001482    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:11.040170    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:11.458026    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:11.500946    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:11.501700    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:11.540848    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:11.957876    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:11.999928    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:12.001376    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:12.039503    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:12.458150    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:12.500963    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:12.501650    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:12.540044    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:12.958013    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:13.000146    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:13.001271    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:13.039568    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:13.456806    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:13.500490    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:13.501318    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:13.539427    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:13.957923    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:14.058950    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:14.058964    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:14.059021    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:14.457506    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:14.500826    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:14.500857    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:14.539946    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:14.957180    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:15.000480    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:15.001424    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:15.039869    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:15.456924    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:15.501029    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:15.501480    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:15.540015    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:15.957792    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:16.000909    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:16.001245    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:16.039810    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:16.458213    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:16.500987    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:16.501466    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:16.539987    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:16.957503    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:17.001132    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:17.001361    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:17.040113    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:17.458161    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:17.501101    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:17.501595    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:17.540017    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:17.958269    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:18.000710    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:18.001674    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:18.058599    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:18.457003    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:18.500476    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:18.501114    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:18.539043    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:18.957350    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:19.001068    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:19.001201    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:19.039980    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:19.456928    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:19.500797    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:19.501560    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:19.540397    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:19.958061    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:20.058888    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:20.059122    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:20.059125    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:20.457759    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:20.500588    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0110 08:21:20.501908    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:20.540645    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:20.957957    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:21.000620    8513 kapi.go:107] duration metric: took 36.003199903s to wait for kubernetes.io/minikube-addons=registry ...
	I0110 08:21:21.001240    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:21.039873    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:21.457158    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:21.501664    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:21.540389    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:21.957888    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:22.001455    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:22.057998    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:22.458473    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:22.503557    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:22.540890    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:22.959467    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:23.002547    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:23.041075    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:23.457032    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:23.501895    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:23.540790    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:23.956821    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:24.001361    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:24.039668    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:24.458081    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:24.501749    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:24.540341    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:24.958178    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:25.001713    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:25.040248    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:25.457621    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:25.502399    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:25.539928    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:25.956919    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:26.076995    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:26.077073    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:26.457013    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:26.557186    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:26.557250    8513 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0110 08:21:26.957283    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:27.001485    8513 kapi.go:107] duration metric: took 42.003039187s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0110 08:21:27.057349    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:27.460585    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:27.540309    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:27.958230    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:28.040802    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:28.459041    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:28.541016    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0110 08:21:28.957958    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:29.059030    8513 kapi.go:107] duration metric: took 37.522106256s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0110 08:21:29.061621    8513 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-910183 cluster.
	I0110 08:21:29.062855    8513 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0110 08:21:29.063862    8513 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0110 08:21:29.457227    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:29.958669    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:30.457454    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:30.957175    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:31.458639    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:31.957256    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:32.458068    8513 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0110 08:21:32.957177    8513 kapi.go:107] duration metric: took 47.503344293s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0110 08:21:32.959004    8513 out.go:179] * Enabled addons: cloud-spanner, storage-provisioner, nvidia-device-plugin, registry-creds, amd-gpu-device-plugin, inspektor-gadget, ingress-dns, metrics-server, default-storageclass, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0110 08:21:32.960261    8513 addons.go:530] duration metric: took 49.740210176s for enable addons: enabled=[cloud-spanner storage-provisioner nvidia-device-plugin registry-creds amd-gpu-device-plugin inspektor-gadget ingress-dns metrics-server default-storageclass yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0110 08:21:32.960306    8513 start.go:247] waiting for cluster config update ...
	I0110 08:21:32.960336    8513 start.go:256] writing updated cluster config ...
	I0110 08:21:32.960598    8513 ssh_runner.go:195] Run: rm -f paused
	I0110 08:21:32.964455    8513 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 08:21:32.968946    8513 pod_ready.go:83] waiting for pod "coredns-7d764666f9-qcg8s" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:21:32.972377    8513 pod_ready.go:94] pod "coredns-7d764666f9-qcg8s" is "Ready"
	I0110 08:21:32.972399    8513 pod_ready.go:86] duration metric: took 3.429395ms for pod "coredns-7d764666f9-qcg8s" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:21:32.974120    8513 pod_ready.go:83] waiting for pod "etcd-addons-910183" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:21:32.977247    8513 pod_ready.go:94] pod "etcd-addons-910183" is "Ready"
	I0110 08:21:32.977265    8513 pod_ready.go:86] duration metric: took 3.124437ms for pod "etcd-addons-910183" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:21:32.978828    8513 pod_ready.go:83] waiting for pod "kube-apiserver-addons-910183" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:21:32.981846    8513 pod_ready.go:94] pod "kube-apiserver-addons-910183" is "Ready"
	I0110 08:21:32.981868    8513 pod_ready.go:86] duration metric: took 3.021495ms for pod "kube-apiserver-addons-910183" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:21:32.983504    8513 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-910183" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:21:33.367819    8513 pod_ready.go:94] pod "kube-controller-manager-addons-910183" is "Ready"
	I0110 08:21:33.367844    8513 pod_ready.go:86] duration metric: took 384.326ms for pod "kube-controller-manager-addons-910183" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:21:33.567716    8513 pod_ready.go:83] waiting for pod "kube-proxy-zww5l" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:21:33.968530    8513 pod_ready.go:94] pod "kube-proxy-zww5l" is "Ready"
	I0110 08:21:33.968556    8513 pod_ready.go:86] duration metric: took 400.791711ms for pod "kube-proxy-zww5l" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:21:34.168221    8513 pod_ready.go:83] waiting for pod "kube-scheduler-addons-910183" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:21:34.567962    8513 pod_ready.go:94] pod "kube-scheduler-addons-910183" is "Ready"
	I0110 08:21:34.567986    8513 pod_ready.go:86] duration metric: took 399.743015ms for pod "kube-scheduler-addons-910183" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:21:34.567997    8513 pod_ready.go:40] duration metric: took 1.603514456s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 08:21:34.610064    8513 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 08:21:34.611778    8513 out.go:179] * Done! kubectl is now configured to use "addons-910183" cluster and "default" namespace by default
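	
	The kapi.go and pod_ready.go entries above are two layers of the same readiness protocol: poll the API server for pods matching a label selector, and treat the set as ready only once every match reports the Ready condition. Below is a minimal sketch of that loop using client-go; the package name, the waitForLabel helper, and the 500ms poll interval are illustrative assumptions, not minikube's actual implementation.
	
	package readiness
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	
	// waitForLabel polls until at least one pod matches the selector and
	// every match is Ready, mirroring the repeated
	// `waiting for pod "...", current state: Pending` lines logged above.
	func waitForLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			ready := len(pods.Items) > 0
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					ready = false
					fmt.Printf("waiting for pod %q, current state: %s\n",
						selector, pods.Items[i].Status.Phase)
				}
			}
			if ready {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond):
			}
		}
	}
	
	The per-selector duration metrics above (e.g. 36s for kubernetes.io/minikube-addons=registry, 47.5s for csi-hostpath-driver) are what such a loop reports when it finally returns.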
	
	
	==> CRI-O <==
	Jan 10 08:21:32 addons-910183 crio[771]: time="2026-01-10T08:21:32.118492039Z" level=info msg="Starting container: 52e1025c4414838af6827ca4e9af267b60d7bab029698821e94b8c8c53b47a02" id=bbe9da13-b59b-457e-a1e8-3c8c5bad1f26 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:21:32 addons-910183 crio[771]: time="2026-01-10T08:21:32.121312222Z" level=info msg="Started container" PID=6370 containerID=52e1025c4414838af6827ca4e9af267b60d7bab029698821e94b8c8c53b47a02 description=kube-system/csi-hostpathplugin-pmk7s/csi-snapshotter id=bbe9da13-b59b-457e-a1e8-3c8c5bad1f26 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fd2f4f76c777be0d95546cddb5c7ed5667598cbac4c316289042d5980f4be3c6
	Jan 10 08:21:35 addons-910183 crio[771]: time="2026-01-10T08:21:35.419191158Z" level=info msg="Running pod sandbox: default/busybox/POD" id=283f0532-efbc-4a65-b91a-1f58c4293c92 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 08:21:35 addons-910183 crio[771]: time="2026-01-10T08:21:35.419261377Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:21:35 addons-910183 crio[771]: time="2026-01-10T08:21:35.425214024Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6ba2ad8ac7ec0a739f231cf0d10b53805c01fcb4b08e4b0986623670555e4cee UID:5622c18b-bb8e-4a63-9ff1-ce28d7ec8b94 NetNS:/var/run/netns/8fe7508e-a653-4dc4-85b3-c74a97c39970 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00120e2d8}] Aliases:map[]}"
	Jan 10 08:21:35 addons-910183 crio[771]: time="2026-01-10T08:21:35.425239019Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 10 08:21:35 addons-910183 crio[771]: time="2026-01-10T08:21:35.442084865Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:6ba2ad8ac7ec0a739f231cf0d10b53805c01fcb4b08e4b0986623670555e4cee UID:5622c18b-bb8e-4a63-9ff1-ce28d7ec8b94 NetNS:/var/run/netns/8fe7508e-a653-4dc4-85b3-c74a97c39970 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00120e2d8}] Aliases:map[]}"
	Jan 10 08:21:35 addons-910183 crio[771]: time="2026-01-10T08:21:35.442208926Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 10 08:21:35 addons-910183 crio[771]: time="2026-01-10T08:21:35.443166442Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 10 08:21:35 addons-910183 crio[771]: time="2026-01-10T08:21:35.44397864Z" level=info msg="Ran pod sandbox 6ba2ad8ac7ec0a739f231cf0d10b53805c01fcb4b08e4b0986623670555e4cee with infra container: default/busybox/POD" id=283f0532-efbc-4a65-b91a-1f58c4293c92 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 08:21:35 addons-910183 crio[771]: time="2026-01-10T08:21:35.445213955Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=64cf7825-3353-424a-9073-c43af2202e5a name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:21:35 addons-910183 crio[771]: time="2026-01-10T08:21:35.445334105Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=64cf7825-3353-424a-9073-c43af2202e5a name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:21:35 addons-910183 crio[771]: time="2026-01-10T08:21:35.445393396Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=64cf7825-3353-424a-9073-c43af2202e5a name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:21:35 addons-910183 crio[771]: time="2026-01-10T08:21:35.446203464Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f956201b-2b69-4f9b-8afa-5593faf0853e name=/runtime.v1.ImageService/PullImage
	Jan 10 08:21:35 addons-910183 crio[771]: time="2026-01-10T08:21:35.446520282Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 10 08:21:36 addons-910183 crio[771]: time="2026-01-10T08:21:36.573760146Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f956201b-2b69-4f9b-8afa-5593faf0853e name=/runtime.v1.ImageService/PullImage
	Jan 10 08:21:36 addons-910183 crio[771]: time="2026-01-10T08:21:36.574288993Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b2b50b4d-85fd-4f85-ab71-40586a902272 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:21:36 addons-910183 crio[771]: time="2026-01-10T08:21:36.576116753Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=34a5e4b3-f169-402b-b06a-33c189f2d48d name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:21:36 addons-910183 crio[771]: time="2026-01-10T08:21:36.579663466Z" level=info msg="Creating container: default/busybox/busybox" id=2422fe65-0381-4edc-927e-e2c23735749e name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:21:36 addons-910183 crio[771]: time="2026-01-10T08:21:36.579797176Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:21:36 addons-910183 crio[771]: time="2026-01-10T08:21:36.584675155Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:21:36 addons-910183 crio[771]: time="2026-01-10T08:21:36.585130714Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:21:36 addons-910183 crio[771]: time="2026-01-10T08:21:36.619797634Z" level=info msg="Created container d82b1321d1490e16c88e4c2c2a70bc08db68232ef94c119fbdbcf1df8bd9d9b3: default/busybox/busybox" id=2422fe65-0381-4edc-927e-e2c23735749e name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:21:36 addons-910183 crio[771]: time="2026-01-10T08:21:36.620375651Z" level=info msg="Starting container: d82b1321d1490e16c88e4c2c2a70bc08db68232ef94c119fbdbcf1df8bd9d9b3" id=0517a2fb-0c1b-4af2-8804-d0d01015df81 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:21:36 addons-910183 crio[771]: time="2026-01-10T08:21:36.622089246Z" level=info msg="Started container" PID=6476 containerID=d82b1321d1490e16c88e4c2c2a70bc08db68232ef94c119fbdbcf1df8bd9d9b3 description=default/busybox/busybox id=0517a2fb-0c1b-4af2-8804-d0d01015df81 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6ba2ad8ac7ec0a739f231cf0d10b53805c01fcb4b08e4b0986623670555e4cee
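	
	The id=/runtime.v1.RuntimeService/... fields in the journal above are CRI gRPC methods served over CRI-O's unix socket, and the container status table below is the result of listing that same service. A minimal sketch of such a query with the published CRI bindings follows, assuming CRI-O's default socket path and grpc-go >= 1.63; crilog and listContainers are illustrative names, not part of this test suite.
	
	package crilog
	
	import (
		"context"
		"fmt"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	// listContainers dials CRI-O's socket and lists containers via the
	// same runtime.v1.RuntimeService named in the log entries above.
	func listContainers(ctx context.Context) error {
		// grpc.NewClient requires grpc-go >= 1.63; older code used grpc.Dial.
		conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			return err
		}
		defer conn.Close()
	
		rt := runtimeapi.NewRuntimeServiceClient(conn)
		resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			return err
		}
		for _, c := range resp.Containers {
			// CreatedAt is nanoseconds since the epoch; the ID is truncated
			// to 13 characters the way the table below truncates it.
			fmt.Printf("%.13s  %-24s  %-18s  %s\n", c.Id, c.Metadata.Name,
				c.State, time.Unix(0, c.CreatedAt).Format(time.RFC3339))
		}
		return nil
	}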
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	d82b1321d1490       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          7 seconds ago        Running             busybox                                  0                   6ba2ad8ac7ec0       busybox                                     default
	52e1025c44148       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          11 seconds ago       Running             csi-snapshotter                          0                   fd2f4f76c777b       csi-hostpathplugin-pmk7s                    kube-system
	75942fc7d4719       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          12 seconds ago       Running             csi-provisioner                          0                   fd2f4f76c777b       csi-hostpathplugin-pmk7s                    kube-system
	1f0e0366214d1       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            13 seconds ago       Running             liveness-probe                           0                   fd2f4f76c777b       csi-hostpathplugin-pmk7s                    kube-system
	ad41fe8eb72ea       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           14 seconds ago       Running             hostpath                                 0                   fd2f4f76c777b       csi-hostpathplugin-pmk7s                    kube-system
	b8196c6709973       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                15 seconds ago       Running             node-driver-registrar                    0                   fd2f4f76c777b       csi-hostpathplugin-pmk7s                    kube-system
	b070a740cb61b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 15 seconds ago       Running             gcp-auth                                 0                   f6a7db941afb9       gcp-auth-5bbcf684b5-vnx2x                   gcp-auth
	63aa7a14205bf       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             17 seconds ago       Running             controller                               0                   1cde0cb4ee9a5       ingress-nginx-controller-7847b5c79c-6mqqd   ingress-nginx
	67c4ac07287c9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   21 seconds ago       Exited              patch                                    1                   cea9cb50bebda       ingress-nginx-admission-patch-w5xx5         ingress-nginx
	471ab7a71d49f       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ec62224230f19a2cf1dfa480d6b31c048eaa365d192ceb554d2de6304e938d8c                            21 seconds ago       Running             gadget                                   0                   53f3f9fc7eccb       gadget-kjbpw                                gadget
	e283b96a0148a       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              23 seconds ago       Running             registry-proxy                           0                   7a94191f82df5       registry-proxy-zvnnr                        kube-system
	a3befa7150ca5       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   25 seconds ago       Running             csi-external-health-monitor-controller   0                   fd2f4f76c777b       csi-hostpathplugin-pmk7s                    kube-system
	ba1fcf19b0afc       nvcr.io/nvidia/k8s-device-plugin@sha256:c3c1a099015d1810c249ba294beaad656ce0354f7e8a77803dacabe60a4f8c9f                                     26 seconds ago       Running             nvidia-device-plugin-ctr                 0                   02dde9a19ddf8       nvidia-device-plugin-daemonset-cr698        kube-system
	91869078ce48e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   29 seconds ago       Exited              patch                                    0                   c8e98c0fa84b9       gcp-auth-certs-patch-2nbbt                  gcp-auth
	e315fa8c4e521       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      29 seconds ago       Running             volume-snapshot-controller               0                   1dfb3f12d915c       snapshot-controller-6588d87457-fgv47        kube-system
	60d2da1df937e       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     29 seconds ago       Running             amd-gpu-device-plugin                    0                   04881cd687010       amd-gpu-device-plugin-8zrhw                 kube-system
	a26f210429ce8       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      31 seconds ago       Running             volume-snapshot-controller               0                   9a3fb9ca3591c       snapshot-controller-6588d87457-vgbtz        kube-system
	e2d11d137973f       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             32 seconds ago       Running             local-path-provisioner                   0                   6d369dd99d10f       local-path-provisioner-c44bcd496-n6slq      local-path-storage
	ac0bb3017c2df       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              32 seconds ago       Running             csi-resizer                              0                   d3993aa52b60e       csi-hostpath-resizer-0                      kube-system
	b38b613cf2218       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   33 seconds ago       Exited              create                                   0                   298f40af6ec6e       gcp-auth-certs-create-7ncj9                 gcp-auth
	80376415781ce       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             33 seconds ago       Running             csi-attacher                             0                   fd3baaff0f367       csi-hostpath-attacher-0                     kube-system
	682bed7b07e1e       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   34 seconds ago       Exited              create                                   0                   d1fedee89720b       ingress-nginx-admission-create-lqgsl        ingress-nginx
	ad2929c9cecab       ghcr.io/manusa/yakd@sha256:45d2fe163841511e351ae36a5e434fb854a886b0d6a70cea692bd707543fd8c6                                                  35 seconds ago       Running             yakd                                     0                   3645ecfa564ab       yakd-dashboard-7bcf5795cd-g7g6p             yakd-dashboard
	f180f3ed8bb64       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        38 seconds ago       Running             metrics-server                           0                   8e85f380b7e7c       metrics-server-5778bb4788-228dp             kube-system
	bbf25263f21e8       gcr.io/cloud-spanner-emulator/emulator@sha256:b948b04b45496ebeb13eee27bc9d238593c142e8e010443892153f181591abde                               39 seconds ago       Running             cloud-spanner-emulator                   0                   0db704731e119       cloud-spanner-emulator-5649ccbc87-cln6m     default
	2149b85c56959       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           42 seconds ago       Running             registry                                 0                   d8e652625e277       registry-788cd7d5bc-49v9n                   kube-system
	82bfb461b79f2       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               43 seconds ago       Running             minikube-ingress-dns                     0                   df17c3dc7e614       kube-ingress-dns-minikube                   kube-system
	800ef08a84703       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             47 seconds ago       Running             storage-provisioner                      0                   8f3534c00a0ea       storage-provisioner                         kube-system
	cc65f89692fad       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                                                             47 seconds ago       Running             coredns                                  0                   e38e41a13cce7       coredns-7d764666f9-qcg8s                    kube-system
	c98446ea3c7df       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27                                           58 seconds ago       Running             kindnet-cni                              0                   09dc704f0ee31       kindnet-nz7j6                               kube-system
	5f8174e666e70       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                                                             About a minute ago   Running             kube-proxy                               0                   a3f92de3dbc29       kube-proxy-zww5l                            kube-system
	273ea06f61975       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                                                             About a minute ago   Running             kube-scheduler                           0                   4f79cfabae5d0       kube-scheduler-addons-910183                kube-system
	5394108065d3c       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                                                             About a minute ago   Running             kube-controller-manager                  0                   18ce4666d2e04       kube-controller-manager-addons-910183       kube-system
	b80c4916fca96       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                                                             About a minute ago   Running             kube-apiserver                           0                   45a7539dba2ff       kube-apiserver-addons-910183                kube-system
	f6fa6e8d4ac06       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                                                             About a minute ago   Running             etcd                                     0                   766fbec68b53e       etcd-addons-910183                          kube-system
	
	
	==> coredns [cc65f89692fad8f8f11138c2f42e6bc16e264609f7cf63fd4ef0448bcfbcd8a9] <==
	[INFO] 10.244.0.18:60677 - 5947 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000158409s
	[INFO] 10.244.0.18:48518 - 10664 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000099359s
	[INFO] 10.244.0.18:48518 - 10360 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000147358s
	[INFO] 10.244.0.18:56451 - 56758 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.00008175s
	[INFO] 10.244.0.18:56451 - 56423 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000109988s
	[INFO] 10.244.0.18:51516 - 53950 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000077293s
	[INFO] 10.244.0.18:51516 - 54196 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.00014691s
	[INFO] 10.244.0.18:36018 - 3025 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000053068s
	[INFO] 10.244.0.18:36018 - 2742 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000100015s
	[INFO] 10.244.0.18:33159 - 13280 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000093407s
	[INFO] 10.244.0.18:33159 - 13043 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000104716s
	[INFO] 10.244.0.21:34143 - 900 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00015023s
	[INFO] 10.244.0.21:34893 - 32003 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000215556s
	[INFO] 10.244.0.21:56171 - 5884 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000143239s
	[INFO] 10.244.0.21:45270 - 12499 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00018452s
	[INFO] 10.244.0.21:41493 - 40171 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000119991s
	[INFO] 10.244.0.21:53800 - 65107 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000155553s
	[INFO] 10.244.0.21:41989 - 9635 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.005315317s
	[INFO] 10.244.0.21:41909 - 32606 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.006582184s
	[INFO] 10.244.0.21:58454 - 56700 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005002057s
	[INFO] 10.244.0.21:46679 - 1512 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005880221s
	[INFO] 10.244.0.21:40002 - 11705 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004405592s
	[INFO] 10.244.0.21:35215 - 47903 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005394042s
	[INFO] 10.244.0.21:41341 - 54935 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000782626s
	[INFO] 10.244.0.21:47018 - 53215 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.00112928s
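	
	The NXDOMAIN ladder above is ordinary resolv.conf search-list expansion. With the Kubernetes default ndots:5, a name such as storage.googleapis.com (two dots) is first tried against every search domain, and only the final bare query succeeds (NOERROR). A small sketch of that candidate ordering; the search list below is read off the queries themselves, since the pod's actual /etc/resolv.conf is not part of this report:
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// candidates reproduces the resolver's expansion order: names with fewer
	// dots than ndots try each search suffix first, then the literal name.
	func candidates(name string, search []string, ndots int) []string {
		var out []string
		if strings.Count(name, ".") < ndots {
			for _, s := range search {
				out = append(out, name+"."+s)
			}
		}
		return append(out, name)
	}
	
	func main() {
		search := []string{
			"gcp-auth.svc.cluster.local", // pod namespace first (10.244.0.21 is in gcp-auth)
			"svc.cluster.local",
			"cluster.local",
			"us-east4-a.c.k8s-minikube.internal", // GCE search domains inherited from the host
			"c.k8s-minikube.internal",
			"google.internal",
		}
		for _, q := range candidates("storage.googleapis.com", search, 5) {
			fmt.Println(q) // same order as the coredns queries above
		}
	}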
	
	
	==> describe nodes <==
	Name:               addons-910183
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-910183
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=addons-910183
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T08_20_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-910183
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-910183"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 08:20:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-910183
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 08:21:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 08:21:38 +0000   Sat, 10 Jan 2026 08:20:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 08:21:38 +0000   Sat, 10 Jan 2026 08:20:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 08:21:38 +0000   Sat, 10 Jan 2026 08:20:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 08:21:38 +0000   Sat, 10 Jan 2026 08:20:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-910183
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                82c1d768-2f1a-4881-8308-15f89113b8d9
	  Boot ID:                    7c12f492-59b6-440b-986c-741ece916a23
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  default                     cloud-spanner-emulator-5649ccbc87-cln6m      0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  gadget                      gadget-kjbpw                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  gcp-auth                    gcp-auth-5bbcf684b5-vnx2x                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  ingress-nginx               ingress-nginx-controller-7847b5c79c-6mqqd    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         60s
	  kube-system                 amd-gpu-device-plugin-8zrhw                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 coredns-7d764666f9-qcg8s                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     61s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 csi-hostpathplugin-pmk7s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 etcd-addons-910183                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         67s
	  kube-system                 kindnet-nz7j6                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      61s
	  kube-system                 kube-apiserver-addons-910183                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-controller-manager-addons-910183        200m (2%)     0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-zww5l                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-scheduler-addons-910183                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 metrics-server-5778bb4788-228dp              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         60s
	  kube-system                 nvidia-device-plugin-daemonset-cr698         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 registry-788cd7d5bc-49v9n                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 registry-creds-567fb78d95-6qnq9              0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 registry-proxy-zvnnr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 snapshot-controller-6588d87457-fgv47         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 snapshot-controller-6588d87457-vgbtz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  local-path-storage          local-path-provisioner-c44bcd496-n6slq       0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  yakd-dashboard              yakd-dashboard-7bcf5795cd-g7g6p              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  62s   node-controller  Node addons-910183 event: Registered Node addons-910183 in Controller
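	
	The block above is standard `kubectl describe node` output. The same Conditions table can be read programmatically; a hedged client-go sketch, with the kubeconfig path as an illustrative assumption and the node name taken from this report:
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// ~/.kube/config; inside a pod, rest.InClusterConfig() would be used instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-910183", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// Prints the MemoryPressure/DiskPressure/PIDPressure/Ready rows seen above.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}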
	
	
	==> dmesg <==
	[Jan10 08:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001659] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.404004] i8042: Warning: Keylock active
	[  +0.021255] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.508728] block sda: the capability attribute has been deprecated.
	[  +0.091638] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026443] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.290756] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [f6fa6e8d4ac06e107cc11e058aaccfc1f58cc2d3bde427b80777917f1c523209] <==
	{"level":"info","ts":"2026-01-10T08:20:34.411836Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:20:34.411932Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T08:20:34.411957Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T08:20:34.412721Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:20:34.412883Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:20:34.412977Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:20:34.413014Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:20:34.413051Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T08:20:34.413122Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T08:20:34.413760Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:20:34.414847Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T08:20:34.415526Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"warn","ts":"2026-01-10T08:21:06.968666Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.671018ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/extension-apiserver-authentication\" limit:1 ","response":"range_response_count:1 size:3021"}
	{"level":"info","ts":"2026-01-10T08:21:06.968781Z","caller":"traceutil/trace.go:172","msg":"trace[1992301095] range","detail":"{range_begin:/registry/configmaps/kube-system/extension-apiserver-authentication; range_end:; response_count:1; response_revision:984; }","duration":"109.821132ms","start":"2026-01-10T08:21:06.858944Z","end":"2026-01-10T08:21:06.968765Z","steps":["trace[1992301095] 'range keys from in-memory index tree'  (duration: 109.536632ms)"],"step_count":1}
	{"level":"warn","ts":"2026-01-10T08:21:07.150239Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"111.401711ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2026-01-10T08:21:07.150302Z","caller":"traceutil/trace.go:172","msg":"trace[1053413871] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:986; }","duration":"111.478016ms","start":"2026-01-10T08:21:07.038811Z","end":"2026-01-10T08:21:07.150289Z","steps":["trace[1053413871] 'agreement among raft nodes before linearized reading'  (duration: 111.366126ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-10T08:21:07.150309Z","caller":"traceutil/trace.go:172","msg":"trace[150807217] transaction","detail":"{read_only:false; response_revision:988; number_of_response:1; }","duration":"132.454377ms","start":"2026-01-10T08:21:07.017847Z","end":"2026-01-10T08:21:07.150301Z","steps":["trace[150807217] 'process raft request'  (duration: 132.411494ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-10T08:21:07.150362Z","caller":"traceutil/trace.go:172","msg":"trace[608545141] transaction","detail":"{read_only:false; response_revision:987; number_of_response:1; }","duration":"133.046548ms","start":"2026-01-10T08:21:07.017299Z","end":"2026-01-10T08:21:07.150345Z","steps":["trace[608545141] 'process raft request'  (duration: 132.849911ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-10T08:21:07.273199Z","caller":"traceutil/trace.go:172","msg":"trace[631319801] transaction","detail":"{read_only:false; response_revision:989; number_of_response:1; }","duration":"159.034912ms","start":"2026-01-10T08:21:07.114145Z","end":"2026-01-10T08:21:07.273180Z","steps":["trace[631319801] 'process raft request'  (duration: 156.797985ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-10T08:21:07.273287Z","caller":"traceutil/trace.go:172","msg":"trace[1961193197] transaction","detail":"{read_only:false; response_revision:990; number_of_response:1; }","duration":"118.430349ms","start":"2026-01-10T08:21:07.154845Z","end":"2026-01-10T08:21:07.273275Z","steps":["trace[1961193197] 'process raft request'  (duration: 118.363392ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-10T08:21:09.850582Z","caller":"traceutil/trace.go:172","msg":"trace[539653445] transaction","detail":"{read_only:false; response_revision:1007; number_of_response:1; }","duration":"113.032004ms","start":"2026-01-10T08:21:09.737536Z","end":"2026-01-10T08:21:09.850568Z","steps":["trace[539653445] 'process raft request'  (duration: 112.931632ms)"],"step_count":1}
	{"level":"warn","ts":"2026-01-10T08:21:10.091292Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"135.722467ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2026-01-10T08:21:10.091362Z","caller":"traceutil/trace.go:172","msg":"trace[226174836] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1007; }","duration":"135.808501ms","start":"2026-01-10T08:21:09.955540Z","end":"2026-01-10T08:21:10.091349Z","steps":["trace[226174836] 'range keys from in-memory index tree'  (duration: 133.528779ms)"],"step_count":1}
	{"level":"warn","ts":"2026-01-10T08:21:10.092008Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.62567ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128042569028151795 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-w5xx5\" mod_revision:856 > success:<request_put:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-w5xx5\" value_size:4944 >> failure:<request_range:<key:\"/registry/pods/ingress-nginx/ingress-nginx-admission-patch-w5xx5\" > >>","response":"size:16"}
	{"level":"info","ts":"2026-01-10T08:21:10.092117Z","caller":"traceutil/trace.go:172","msg":"trace[883665900] transaction","detail":"{read_only:false; response_revision:1008; number_of_response:1; }","duration":"234.611117ms","start":"2026-01-10T08:21:09.857495Z","end":"2026-01-10T08:21:10.092106Z","steps":["trace[883665900] 'process raft request'  (duration: 100.273175ms)","trace[883665900] 'compare'  (duration: 133.454629ms)"],"step_count":2}
	
	
	==> gcp-auth [b070a740cb61b48d70da143053d80362b41b6a623889dec11b0459e81ec04621] <==
	2026/01/10 08:21:28 GCP Auth Webhook started!
	2026/01/10 08:21:34 Ready to marshal response ...
	2026/01/10 08:21:34 Ready to write response ...
	2026/01/10 08:21:35 Ready to marshal response ...
	2026/01/10 08:21:35 Ready to write response ...
	2026/01/10 08:21:35 Ready to marshal response ...
	2026/01/10 08:21:35 Ready to write response ...
	
	
	==> kernel <==
	 08:21:44 up 4 min,  0 user,  load average: 2.83, 1.25, 0.46
	Linux addons-910183 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c98446ea3c7dfb0a00df0216bc2d759bfb2abd9423c844e37dd879a9f1842ce2] <==
	I0110 08:20:45.761044       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 08:20:45.761334       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0110 08:20:45.761442       1 main.go:148] setting mtu 1500 for CNI 
	I0110 08:20:45.761461       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 08:20:45.761478       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T08:20:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 08:20:45.965300       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 08:20:45.965366       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 08:20:45.965379       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 08:20:45.966225       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 08:20:46.359286       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 08:20:46.359316       1 metrics.go:72] Registering metrics
	I0110 08:20:46.359393       1 controller.go:711] "Syncing nftables rules"
	I0110 08:20:55.966842       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 08:20:55.966907       1 main.go:301] handling current node
	I0110 08:21:05.965301       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 08:21:05.965366       1 main.go:301] handling current node
	I0110 08:21:15.965772       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 08:21:15.965826       1 main.go:301] handling current node
	I0110 08:21:25.965497       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 08:21:25.965553       1 main.go:301] handling current node
	I0110 08:21:35.966227       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0110 08:21:35.966264       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b80c4916fca961561a4c4f5172bd77ec246690c99788e26501b0d53f2be91145] <==
	W0110 08:20:56.203082       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.189.125:443: connect: connection refused
	E0110 08:20:56.203115       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.189.125:443: connect: connection refused" logger="UnhandledError"
	W0110 08:20:56.220578       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.189.125:443: connect: connection refused
	E0110 08:20:56.220613       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.189.125:443: connect: connection refused" logger="UnhandledError"
	W0110 08:20:56.225261       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.102.189.125:443: connect: connection refused
	E0110 08:20:56.225300       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.102.189.125:443: connect: connection refused" logger="UnhandledError"
	W0110 08:21:07.274040       1 handler_proxy.go:99] no RequestInfo found in the context
	E0110 08:21:07.274139       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E0110 08:21:07.274408       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.134.31:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.134.31:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.134.31:443: connect: connection refused" logger="UnhandledError"
	E0110 08:21:07.275972       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.134.31:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.134.31:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.134.31:443: connect: connection refused" logger="UnhandledError"
	E0110 08:21:07.281954       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.134.31:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.134.31:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.134.31:443: connect: connection refused" logger="UnhandledError"
	E0110 08:21:07.302641       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.134.31:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.134.31:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.134.31:443: connect: connection refused" logger="UnhandledError"
	E0110 08:21:07.343930       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.134.31:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.134.31:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.134.31:443: connect: connection refused" logger="UnhandledError"
	E0110 08:21:07.424813       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.134.31:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.134.31:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.134.31:443: connect: connection refused" logger="UnhandledError"
	E0110 08:21:07.586043       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.134.31:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.134.31:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.134.31:443: connect: connection refused" logger="UnhandledError"
	I0110 08:21:07.939463       1 handler.go:304] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0110 08:21:12.123051       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0110 08:21:12.131790       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W0110 08:21:12.224429       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0110 08:21:12.232787       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E0110 08:21:42.253818       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49614: use of closed network connection
	E0110 08:21:42.390931       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:49646: use of closed network connection
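	
	The "failing open" lines mean the gcp-auth mutating webhook is registered with failurePolicy: Ignore, so admission proceeds even while the webhook backend is still unreachable; the later v1beta1.metrics.k8s.io errors are the aggregation layer polling metrics-server before it is ready. A client-go sketch that lists every mutating webhook's failure policy (same kubeconfig setup as the node example above):
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		cfgs, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
			List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range cfgs.Items {
			for _, wh := range c.Webhooks {
				policy := "Fail" // the v1 API default when unset
				if wh.FailurePolicy != nil {
					policy = string(*wh.FailurePolicy)
				}
				fmt.Printf("%s/%s failurePolicy=%s\n", c.Name, wh.Name, policy)
			}
		}
	}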
	
	
	==> kube-controller-manager [5394108065d3cfd7430eab4005c30e45dbab412c8e88c64087470bc4bdf68d94] <==
	I0110 08:20:42.103848       1 shared_informer.go:377] "Caches are synced"
	I0110 08:20:42.103909       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="addons-910183"
	I0110 08:20:42.103954       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0110 08:20:42.103962       1 shared_informer.go:377] "Caches are synced"
	I0110 08:20:42.103982       1 shared_informer.go:377] "Caches are synced"
	I0110 08:20:42.103991       1 shared_informer.go:377] "Caches are synced"
	I0110 08:20:42.104094       1 shared_informer.go:377] "Caches are synced"
	I0110 08:20:42.104129       1 shared_informer.go:377] "Caches are synced"
	I0110 08:20:42.104149       1 shared_informer.go:377] "Caches are synced"
	I0110 08:20:42.104303       1 shared_informer.go:377] "Caches are synced"
	I0110 08:20:42.104351       1 shared_informer.go:377] "Caches are synced"
	I0110 08:20:42.107705       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:20:42.109225       1 shared_informer.go:377] "Caches are synced"
	I0110 08:20:42.111140       1 range_allocator.go:433] "Set node PodCIDR" node="addons-910183" podCIDRs=["10.244.0.0/24"]
	I0110 08:20:42.203285       1 shared_informer.go:377] "Caches are synced"
	I0110 08:20:42.203307       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 08:20:42.203314       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 08:20:42.208665       1 shared_informer.go:377] "Caches are synced"
	E0110 08:20:44.703823       1 replica_set.go:592] "Unhandled Error" err="sync \"kube-system/metrics-server-5778bb4788\" failed with pods \"metrics-server-5778bb4788-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I0110 08:20:57.105968       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0110 08:21:12.116140       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="volumesnapshots.snapshot.storage.k8s.io"
	I0110 08:21:12.116253       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:21:12.216424       1 shared_informer.go:377] "Caches are synced"
	I0110 08:21:12.218136       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:21:12.318543       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [5f8174e666e70787ef47924deb832bd181b1187a4f93adde2719afd222583233] <==
	I0110 08:20:44.049984       1 server_linux.go:53] "Using iptables proxy"
	I0110 08:20:44.386236       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:20:44.491080       1 shared_informer.go:377] "Caches are synced"
	I0110 08:20:44.491134       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0110 08:20:44.491230       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 08:20:44.642044       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 08:20:44.642271       1 server_linux.go:136] "Using iptables Proxier"
	I0110 08:20:44.656457       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 08:20:44.663774       1 server.go:529] "Version info" version="v1.35.0"
	I0110 08:20:44.663867       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:20:44.679833       1 config.go:200] "Starting service config controller"
	I0110 08:20:44.679859       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 08:20:44.680015       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 08:20:44.680040       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 08:20:44.680060       1 config.go:106] "Starting endpoint slice config controller"
	I0110 08:20:44.680065       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 08:20:44.681947       1 config.go:309] "Starting node config controller"
	I0110 08:20:44.681967       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 08:20:44.681974       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 08:20:44.781336       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 08:20:44.781490       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 08:20:44.781601       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [273ea06f61975e2f11258602b200a4bc83fa37bbfdf217550f9a076abce1b87f] <==
	E0110 08:20:35.311073       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 08:20:35.311081       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 08:20:35.311127       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 08:20:35.311135       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 08:20:35.311191       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 08:20:35.311182       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 08:20:35.311245       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 08:20:35.311368       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 08:20:35.311397       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 08:20:35.311394       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 08:20:35.311411       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 08:20:35.311412       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 08:20:35.311443       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 08:20:36.135635       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 08:20:36.138196       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 08:20:36.150878       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 08:20:36.155389       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 08:20:36.163013       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E0110 08:20:36.211097       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 08:20:36.240147       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 08:20:36.240427       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 08:20:36.304898       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 08:20:36.338982       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 08:20:36.391206       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	I0110 08:20:38.506012       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 08:21:23 addons-910183 kubelet[1271]: I0110 08:21:23.911208    1271 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="gadget/gadget-kjbpw" podStartSLOduration=18.708857171 podStartE2EDuration="39.911189813s" podCreationTimestamp="2026-01-10 08:20:44 +0000 UTC" firstStartedPulling="2026-01-10 08:21:00.821143937 +0000 UTC m=+23.309351337" lastFinishedPulling="2026-01-10 08:21:22.023476585 +0000 UTC m=+44.511683979" observedRunningTime="2026-01-10 08:21:22.84296711 +0000 UTC m=+45.331174521" watchObservedRunningTime="2026-01-10 08:21:23.911189813 +0000 UTC m=+46.399397222"
	Jan 10 08:21:24 addons-910183 kubelet[1271]: I0110 08:21:24.460965    1271 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/adb13024-c396-426a-a989-3a9fcd3b02ac-kube-api-access-8r2wd\" (UniqueName: \"kubernetes.io/projected/adb13024-c396-426a-a989-3a9fcd3b02ac-kube-api-access-8r2wd\") pod \"adb13024-c396-426a-a989-3a9fcd3b02ac\" (UID: \"adb13024-c396-426a-a989-3a9fcd3b02ac\") "
	Jan 10 08:21:24 addons-910183 kubelet[1271]: I0110 08:21:24.463372    1271 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adb13024-c396-426a-a989-3a9fcd3b02ac-kube-api-access-8r2wd" pod "adb13024-c396-426a-a989-3a9fcd3b02ac" (UID: "adb13024-c396-426a-a989-3a9fcd3b02ac"). InnerVolumeSpecName "kube-api-access-8r2wd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Jan 10 08:21:24 addons-910183 kubelet[1271]: I0110 08:21:24.561917    1271 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8r2wd\" (UniqueName: \"kubernetes.io/projected/adb13024-c396-426a-a989-3a9fcd3b02ac-kube-api-access-8r2wd\") on node \"addons-910183\" DevicePath \"\""
	Jan 10 08:21:24 addons-910183 kubelet[1271]: I0110 08:21:24.819368    1271 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cea9cb50bebda800a008edd31112ac7382b0aa3a563d0032b397508cad7dbc66"
	Jan 10 08:21:25 addons-910183 kubelet[1271]: E0110 08:21:25.792501    1271 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-kjbpw" containerName="gadget"
	Jan 10 08:21:25 addons-910183 kubelet[1271]: E0110 08:21:25.822441    1271 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-kjbpw" containerName="gadget"
	Jan 10 08:21:26 addons-910183 kubelet[1271]: E0110 08:21:26.827167    1271 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-6mqqd" containerName="controller"
	Jan 10 08:21:26 addons-910183 kubelet[1271]: E0110 08:21:26.827351    1271 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-kjbpw" containerName="gadget"
	Jan 10 08:21:26 addons-910183 kubelet[1271]: I0110 08:21:26.837948    1271 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-6mqqd" podStartSLOduration=28.610881992 podStartE2EDuration="42.837929977s" podCreationTimestamp="2026-01-10 08:20:44 +0000 UTC" firstStartedPulling="2026-01-10 08:21:12.143215435 +0000 UTC m=+34.631422838" lastFinishedPulling="2026-01-10 08:21:26.370263424 +0000 UTC m=+48.858470823" observedRunningTime="2026-01-10 08:21:26.837491808 +0000 UTC m=+49.325699229" watchObservedRunningTime="2026-01-10 08:21:26.837929977 +0000 UTC m=+49.326137387"
	Jan 10 08:21:27 addons-910183 kubelet[1271]: E0110 08:21:27.831664    1271 prober_manager.go:197] "Startup probe already exists for container" pod="gadget/gadget-kjbpw" containerName="gadget"
	Jan 10 08:21:27 addons-910183 kubelet[1271]: E0110 08:21:27.831837    1271 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-6mqqd" containerName="controller"
	Jan 10 08:21:28 addons-910183 kubelet[1271]: E0110 08:21:28.090993    1271 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Jan 10 08:21:28 addons-910183 kubelet[1271]: E0110 08:21:28.091093    1271 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03397d64-2905-4e30-b560-552bbbed8823-gcr-creds podName:03397d64-2905-4e30-b560-552bbbed8823 nodeName:}" failed. No retries permitted until 2026-01-10 08:22:00.091073578 +0000 UTC m=+82.579280966 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/03397d64-2905-4e30-b560-552bbbed8823-gcr-creds") pod "registry-creds-567fb78d95-6qnq9" (UID: "03397d64-2905-4e30-b560-552bbbed8823") : secret "registry-creds-gcr" not found
	Jan 10 08:21:28 addons-910183 kubelet[1271]: I0110 08:21:28.849300    1271 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="gcp-auth/gcp-auth-5bbcf684b5-vnx2x" podStartSLOduration=21.865050116 podStartE2EDuration="37.849282334s" podCreationTimestamp="2026-01-10 08:20:51 +0000 UTC" firstStartedPulling="2026-01-10 08:21:12.188912488 +0000 UTC m=+34.677119888" lastFinishedPulling="2026-01-10 08:21:28.1731447 +0000 UTC m=+50.661352106" observedRunningTime="2026-01-10 08:21:28.848675646 +0000 UTC m=+51.336883064" watchObservedRunningTime="2026-01-10 08:21:28.849282334 +0000 UTC m=+51.337489743"
	Jan 10 08:21:30 addons-910183 kubelet[1271]: I0110 08:21:30.626304    1271 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Jan 10 08:21:30 addons-910183 kubelet[1271]: I0110 08:21:30.626347    1271 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Jan 10 08:21:32 addons-910183 kubelet[1271]: E0110 08:21:32.867539    1271 prober_manager.go:221] "Liveness probe already exists for container" pod="kube-system/csi-hostpathplugin-pmk7s" containerName="hostpath"
	Jan 10 08:21:32 addons-910183 kubelet[1271]: I0110 08:21:32.879951    1271 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-pmk7s" podStartSLOduration=1.474979557 podStartE2EDuration="36.879931365s" podCreationTimestamp="2026-01-10 08:20:56 +0000 UTC" firstStartedPulling="2026-01-10 08:20:56.666714645 +0000 UTC m=+19.154922033" lastFinishedPulling="2026-01-10 08:21:32.071666453 +0000 UTC m=+54.559873841" observedRunningTime="2026-01-10 08:21:32.878654608 +0000 UTC m=+55.366862028" watchObservedRunningTime="2026-01-10 08:21:32.879931365 +0000 UTC m=+55.368138774"
	Jan 10 08:21:33 addons-910183 kubelet[1271]: E0110 08:21:33.871241    1271 prober_manager.go:221] "Liveness probe already exists for container" pod="kube-system/csi-hostpathplugin-pmk7s" containerName="hostpath"
	Jan 10 08:21:35 addons-910183 kubelet[1271]: I0110 08:21:35.243000    1271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp4tb\" (UniqueName: \"kubernetes.io/projected/5622c18b-bb8e-4a63-9ff1-ce28d7ec8b94-kube-api-access-lp4tb\") pod \"busybox\" (UID: \"5622c18b-bb8e-4a63-9ff1-ce28d7ec8b94\") " pod="default/busybox"
	Jan 10 08:21:35 addons-910183 kubelet[1271]: I0110 08:21:35.243038    1271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5622c18b-bb8e-4a63-9ff1-ce28d7ec8b94-gcp-creds\") pod \"busybox\" (UID: \"5622c18b-bb8e-4a63-9ff1-ce28d7ec8b94\") " pod="default/busybox"
	Jan 10 08:21:36 addons-910183 kubelet[1271]: I0110 08:21:36.902182    1271 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.772786337 podStartE2EDuration="1.902152929s" podCreationTimestamp="2026-01-10 08:21:35 +0000 UTC" firstStartedPulling="2026-01-10 08:21:35.445875055 +0000 UTC m=+57.934082457" lastFinishedPulling="2026-01-10 08:21:36.575241648 +0000 UTC m=+59.063449049" observedRunningTime="2026-01-10 08:21:36.900644141 +0000 UTC m=+59.388851552" watchObservedRunningTime="2026-01-10 08:21:36.902152929 +0000 UTC m=+59.390360338"
	Jan 10 08:21:37 addons-910183 kubelet[1271]: E0110 08:21:37.834127    1271 prober_manager.go:209] "Readiness probe already exists for container" pod="ingress-nginx/ingress-nginx-controller-7847b5c79c-6mqqd" containerName="controller"
	Jan 10 08:21:43 addons-910183 kubelet[1271]: I0110 08:21:43.594975    1271 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e8b313e6-9bea-4851-a2c6-c55feb254853" path="/var/lib/kubelet/pods/e8b313e6-9bea-4851-a2c6-c55feb254853/volumes"
	
	
	==> storage-provisioner [800ef08a84703f203b5e10eeaa60cc96e4e0b501f7116ab326aec663a8d44a0a] <==
	W0110 08:21:18.807599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:20.810619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:20.813925       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:22.818581       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:22.828930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:24.832232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:24.837160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:26.840329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:26.845356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:28.848102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:28.852190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:30.858320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:30.863935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:32.867424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:32.872613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:34.875621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:34.879430       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:36.884176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:36.888502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:38.891863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:38.896105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:40.899157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:40.904121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:42.907698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:21:42.912161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-910183 -n addons-910183
helpers_test.go:270: (dbg) Run:  kubectl --context addons-910183 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: gcp-auth-certs-patch-2nbbt ingress-nginx-admission-create-lqgsl ingress-nginx-admission-patch-w5xx5 registry-creds-567fb78d95-6qnq9
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-910183 describe pod gcp-auth-certs-patch-2nbbt ingress-nginx-admission-create-lqgsl ingress-nginx-admission-patch-w5xx5 registry-creds-567fb78d95-6qnq9
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-910183 describe pod gcp-auth-certs-patch-2nbbt ingress-nginx-admission-create-lqgsl ingress-nginx-admission-patch-w5xx5 registry-creds-567fb78d95-6qnq9: exit status 1 (57.360031ms)

** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-patch-2nbbt" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-lqgsl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-w5xx5" not found
	Error from server (NotFound): pods "registry-creds-567fb78d95-6qnq9" not found

** /stderr **
helpers_test.go:288: kubectl --context addons-910183 describe pod gcp-auth-certs-patch-2nbbt ingress-nginx-admission-create-lqgsl ingress-nginx-admission-patch-w5xx5 registry-creds-567fb78d95-6qnq9: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910183 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910183 addons disable headlamp --alsologtostderr -v=1: exit status 11 (234.391528ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0110 08:21:44.908650   17984 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:21:44.908954   17984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:21:44.908964   17984 out.go:374] Setting ErrFile to fd 2...
	I0110 08:21:44.908969   17984 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:21:44.909148   17984 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:21:44.909405   17984 mustload.go:66] Loading cluster: addons-910183
	I0110 08:21:44.909712   17984 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:21:44.909725   17984 addons.go:622] checking whether the cluster is paused
	I0110 08:21:44.909818   17984 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:21:44.909830   17984 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:21:44.910189   17984 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:21:44.929324   17984 ssh_runner.go:195] Run: systemctl --version
	I0110 08:21:44.929382   17984 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:21:44.947600   17984 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:21:45.040437   17984 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:21:45.040528   17984 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:21:45.069351   17984 cri.go:96] found id: "52e1025c4414838af6827ca4e9af267b60d7bab029698821e94b8c8c53b47a02"
	I0110 08:21:45.069381   17984 cri.go:96] found id: "75942fc7d47191bb325503f8137ff909131d1c4d52d50d6647a07624163824bf"
	I0110 08:21:45.069385   17984 cri.go:96] found id: "1f0e0366214d13effb5e1a2fdd2281ce1ab7895177c2879232a660b16f2f36f6"
	I0110 08:21:45.069389   17984 cri.go:96] found id: "ad41fe8eb72ea17b3787fd8b9148b6f74bcbae10876696c1004d2af9067e974a"
	I0110 08:21:45.069392   17984 cri.go:96] found id: "b8196c6709973cd145e81bee513c9295c75a07f339edd1849b8f09eeca3a8902"
	I0110 08:21:45.069396   17984 cri.go:96] found id: "e283b96a0148a542df9ef8145310402e66db65dde0eeb3d91a74bffd800c43b6"
	I0110 08:21:45.069399   17984 cri.go:96] found id: "a3befa7150ca579269272406feaeddcf602e63b5aa269f6a4eb71f0400cf9f65"
	I0110 08:21:45.069402   17984 cri.go:96] found id: "ba1fcf19b0afca5f042f84fda8060304660e32ffcb595b679580f3f1681e8b1b"
	I0110 08:21:45.069405   17984 cri.go:96] found id: "e315fa8c4e521b6643d8334e448d62113f1d082439e58e1761e61d37e8c0adab"
	I0110 08:21:45.069417   17984 cri.go:96] found id: "60d2da1df937e39c979adac7502a680da7554008aebe24a90289fa081d48066c"
	I0110 08:21:45.069422   17984 cri.go:96] found id: "a26f210429ce8f81957c43d13ae7dc0c7795a1237b391a18c23dc21ee3dac83a"
	I0110 08:21:45.069426   17984 cri.go:96] found id: "ac0bb3017c2df5407dc70b2065a8642f17f4632f0eb519eae225e58db88cb7d7"
	I0110 08:21:45.069430   17984 cri.go:96] found id: "80376415781ced2a18f3b49817b93eb4bbe7ad36f4493427b2dcf4382ff8817b"
	I0110 08:21:45.069435   17984 cri.go:96] found id: "f180f3ed8bb6409a8995534d80f2ad8dbca483a62f67c39d69e4883e5841f7b7"
	I0110 08:21:45.069439   17984 cri.go:96] found id: "2149b85c56959d7cc9fe04d1ce74acdf5e9e313ff0eb11e6bab9ca79b3973900"
	I0110 08:21:45.069453   17984 cri.go:96] found id: "82bfb461b79f28c44b698878be57d4f3ed705a492181cc1e2b299581cc8e19c3"
	I0110 08:21:45.069457   17984 cri.go:96] found id: "800ef08a84703f203b5e10eeaa60cc96e4e0b501f7116ab326aec663a8d44a0a"
	I0110 08:21:45.069461   17984 cri.go:96] found id: "cc65f89692fad8f8f11138c2f42e6bc16e264609f7cf63fd4ef0448bcfbcd8a9"
	I0110 08:21:45.069464   17984 cri.go:96] found id: "c98446ea3c7dfb0a00df0216bc2d759bfb2abd9423c844e37dd879a9f1842ce2"
	I0110 08:21:45.069471   17984 cri.go:96] found id: "5f8174e666e70787ef47924deb832bd181b1187a4f93adde2719afd222583233"
	I0110 08:21:45.069474   17984 cri.go:96] found id: "273ea06f61975e2f11258602b200a4bc83fa37bbfdf217550f9a076abce1b87f"
	I0110 08:21:45.069476   17984 cri.go:96] found id: "5394108065d3cfd7430eab4005c30e45dbab412c8e88c64087470bc4bdf68d94"
	I0110 08:21:45.069480   17984 cri.go:96] found id: "b80c4916fca961561a4c4f5172bd77ec246690c99788e26501b0d53f2be91145"
	I0110 08:21:45.069483   17984 cri.go:96] found id: "f6fa6e8d4ac06e107cc11e058aaccfc1f58cc2d3bde427b80777917f1c523209"
	I0110 08:21:45.069485   17984 cri.go:96] found id: ""
	I0110 08:21:45.069541   17984 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:21:45.083456   17984 out.go:203] 
	W0110 08:21:45.085000   17984 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:21:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 08:21:45.085016   17984 out.go:285] * 
	W0110 08:21:45.085678   17984 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:21:45.087072   17984 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-910183 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.46s)
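Every "addons disable" failure in this report follows the pattern above: the command exits with MK_ADDON_DISABLE_PAUSED because minikube's paused-state check runs "sudo runc list -f json" on the node, and /run/runc does not exist under the crio runtime. A minimal reproduction sketch, assuming SSH access to the addons-910183 node (the state-root paths probed are assumptions, not taken from the captured log):

	# inside the node, e.g. via: minikube -p addons-910183 ssh
	sudo runc list -f json                  # reproduces: open /run/runc: no such file or directory
	ls -d /run/runc /run/crun 2>/dev/null   # check which runtime state roots actually exist (hypothetical probe)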

TestAddons/parallel/CloudSpanner (5.27s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5649ccbc87-cln6m" [6ec098ef-89f8-4120-a8b9-428c95f58ba9] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003246361s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910183 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910183 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (258.370216ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0110 08:21:52.939525   18707 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:21:52.939646   18707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:21:52.939654   18707 out.go:374] Setting ErrFile to fd 2...
	I0110 08:21:52.939659   18707 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:21:52.939884   18707 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:21:52.940194   18707 mustload.go:66] Loading cluster: addons-910183
	I0110 08:21:52.940501   18707 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:21:52.940519   18707 addons.go:622] checking whether the cluster is paused
	I0110 08:21:52.940616   18707 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:21:52.940630   18707 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:21:52.941021   18707 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:21:52.962913   18707 ssh_runner.go:195] Run: systemctl --version
	I0110 08:21:52.962975   18707 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:21:52.983851   18707 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:21:53.078748   18707 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:21:53.078824   18707 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:21:53.117497   18707 cri.go:96] found id: "52e1025c4414838af6827ca4e9af267b60d7bab029698821e94b8c8c53b47a02"
	I0110 08:21:53.117517   18707 cri.go:96] found id: "75942fc7d47191bb325503f8137ff909131d1c4d52d50d6647a07624163824bf"
	I0110 08:21:53.117522   18707 cri.go:96] found id: "1f0e0366214d13effb5e1a2fdd2281ce1ab7895177c2879232a660b16f2f36f6"
	I0110 08:21:53.117525   18707 cri.go:96] found id: "ad41fe8eb72ea17b3787fd8b9148b6f74bcbae10876696c1004d2af9067e974a"
	I0110 08:21:53.117528   18707 cri.go:96] found id: "b8196c6709973cd145e81bee513c9295c75a07f339edd1849b8f09eeca3a8902"
	I0110 08:21:53.117531   18707 cri.go:96] found id: "e283b96a0148a542df9ef8145310402e66db65dde0eeb3d91a74bffd800c43b6"
	I0110 08:21:53.117534   18707 cri.go:96] found id: "a3befa7150ca579269272406feaeddcf602e63b5aa269f6a4eb71f0400cf9f65"
	I0110 08:21:53.117538   18707 cri.go:96] found id: "ba1fcf19b0afca5f042f84fda8060304660e32ffcb595b679580f3f1681e8b1b"
	I0110 08:21:53.117543   18707 cri.go:96] found id: "e315fa8c4e521b6643d8334e448d62113f1d082439e58e1761e61d37e8c0adab"
	I0110 08:21:53.117550   18707 cri.go:96] found id: "60d2da1df937e39c979adac7502a680da7554008aebe24a90289fa081d48066c"
	I0110 08:21:53.117554   18707 cri.go:96] found id: "a26f210429ce8f81957c43d13ae7dc0c7795a1237b391a18c23dc21ee3dac83a"
	I0110 08:21:53.117559   18707 cri.go:96] found id: "ac0bb3017c2df5407dc70b2065a8642f17f4632f0eb519eae225e58db88cb7d7"
	I0110 08:21:53.117564   18707 cri.go:96] found id: "80376415781ced2a18f3b49817b93eb4bbe7ad36f4493427b2dcf4382ff8817b"
	I0110 08:21:53.117584   18707 cri.go:96] found id: "f180f3ed8bb6409a8995534d80f2ad8dbca483a62f67c39d69e4883e5841f7b7"
	I0110 08:21:53.117594   18707 cri.go:96] found id: "2149b85c56959d7cc9fe04d1ce74acdf5e9e313ff0eb11e6bab9ca79b3973900"
	I0110 08:21:53.117600   18707 cri.go:96] found id: "82bfb461b79f28c44b698878be57d4f3ed705a492181cc1e2b299581cc8e19c3"
	I0110 08:21:53.117605   18707 cri.go:96] found id: "800ef08a84703f203b5e10eeaa60cc96e4e0b501f7116ab326aec663a8d44a0a"
	I0110 08:21:53.117610   18707 cri.go:96] found id: "cc65f89692fad8f8f11138c2f42e6bc16e264609f7cf63fd4ef0448bcfbcd8a9"
	I0110 08:21:53.117613   18707 cri.go:96] found id: "c98446ea3c7dfb0a00df0216bc2d759bfb2abd9423c844e37dd879a9f1842ce2"
	I0110 08:21:53.117616   18707 cri.go:96] found id: "5f8174e666e70787ef47924deb832bd181b1187a4f93adde2719afd222583233"
	I0110 08:21:53.117621   18707 cri.go:96] found id: "273ea06f61975e2f11258602b200a4bc83fa37bbfdf217550f9a076abce1b87f"
	I0110 08:21:53.117627   18707 cri.go:96] found id: "5394108065d3cfd7430eab4005c30e45dbab412c8e88c64087470bc4bdf68d94"
	I0110 08:21:53.117632   18707 cri.go:96] found id: "b80c4916fca961561a4c4f5172bd77ec246690c99788e26501b0d53f2be91145"
	I0110 08:21:53.117636   18707 cri.go:96] found id: "f6fa6e8d4ac06e107cc11e058aaccfc1f58cc2d3bde427b80777917f1c523209"
	I0110 08:21:53.117644   18707 cri.go:96] found id: ""
	I0110 08:21:53.117702   18707 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:21:53.133100   18707 out.go:203] 
	W0110 08:21:53.134499   18707 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:21:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 08:21:53.134518   18707 out.go:285] * 
	W0110 08:21:53.135211   18707 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:21:53.136646   18707 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-910183 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.27s)

TestAddons/parallel/LocalPath (8.09s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-910183 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-910183 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-910183 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [37fcfe12-1306-41a6-99da-cabd201db8d1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [37fcfe12-1306-41a6-99da-cabd201db8d1] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [37fcfe12-1306-41a6-99da-cabd201db8d1] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.002670017s
addons_test.go:969: (dbg) Run:  kubectl --context addons-910183 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-910183 ssh "cat /opt/local-path-provisioner/pvc-590d5dcf-a626-4841-8d90-5e98965be321_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-910183 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-910183 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910183 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910183 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (256.722847ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0110 08:21:52.987137   18719 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:21:52.987494   18719 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:21:52.987520   18719 out.go:374] Setting ErrFile to fd 2...
	I0110 08:21:52.987531   18719 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:21:52.987835   18719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:21:52.988288   18719 mustload.go:66] Loading cluster: addons-910183
	I0110 08:21:52.988615   18719 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:21:52.988635   18719 addons.go:622] checking whether the cluster is paused
	I0110 08:21:52.988743   18719 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:21:52.988760   18719 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:21:52.989132   18719 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:21:53.007657   18719 ssh_runner.go:195] Run: systemctl --version
	I0110 08:21:53.007711   18719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:21:53.028132   18719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:21:53.121977   18719 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:21:53.122036   18719 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:21:53.155815   18719 cri.go:96] found id: "52e1025c4414838af6827ca4e9af267b60d7bab029698821e94b8c8c53b47a02"
	I0110 08:21:53.155856   18719 cri.go:96] found id: "75942fc7d47191bb325503f8137ff909131d1c4d52d50d6647a07624163824bf"
	I0110 08:21:53.155865   18719 cri.go:96] found id: "1f0e0366214d13effb5e1a2fdd2281ce1ab7895177c2879232a660b16f2f36f6"
	I0110 08:21:53.155870   18719 cri.go:96] found id: "ad41fe8eb72ea17b3787fd8b9148b6f74bcbae10876696c1004d2af9067e974a"
	I0110 08:21:53.155875   18719 cri.go:96] found id: "b8196c6709973cd145e81bee513c9295c75a07f339edd1849b8f09eeca3a8902"
	I0110 08:21:53.155882   18719 cri.go:96] found id: "e283b96a0148a542df9ef8145310402e66db65dde0eeb3d91a74bffd800c43b6"
	I0110 08:21:53.155887   18719 cri.go:96] found id: "a3befa7150ca579269272406feaeddcf602e63b5aa269f6a4eb71f0400cf9f65"
	I0110 08:21:53.155891   18719 cri.go:96] found id: "ba1fcf19b0afca5f042f84fda8060304660e32ffcb595b679580f3f1681e8b1b"
	I0110 08:21:53.155895   18719 cri.go:96] found id: "e315fa8c4e521b6643d8334e448d62113f1d082439e58e1761e61d37e8c0adab"
	I0110 08:21:53.155909   18719 cri.go:96] found id: "60d2da1df937e39c979adac7502a680da7554008aebe24a90289fa081d48066c"
	I0110 08:21:53.155914   18719 cri.go:96] found id: "a26f210429ce8f81957c43d13ae7dc0c7795a1237b391a18c23dc21ee3dac83a"
	I0110 08:21:53.155919   18719 cri.go:96] found id: "ac0bb3017c2df5407dc70b2065a8642f17f4632f0eb519eae225e58db88cb7d7"
	I0110 08:21:53.155924   18719 cri.go:96] found id: "80376415781ced2a18f3b49817b93eb4bbe7ad36f4493427b2dcf4382ff8817b"
	I0110 08:21:53.155928   18719 cri.go:96] found id: "f180f3ed8bb6409a8995534d80f2ad8dbca483a62f67c39d69e4883e5841f7b7"
	I0110 08:21:53.155933   18719 cri.go:96] found id: "2149b85c56959d7cc9fe04d1ce74acdf5e9e313ff0eb11e6bab9ca79b3973900"
	I0110 08:21:53.155950   18719 cri.go:96] found id: "82bfb461b79f28c44b698878be57d4f3ed705a492181cc1e2b299581cc8e19c3"
	I0110 08:21:53.155955   18719 cri.go:96] found id: "800ef08a84703f203b5e10eeaa60cc96e4e0b501f7116ab326aec663a8d44a0a"
	I0110 08:21:53.155961   18719 cri.go:96] found id: "cc65f89692fad8f8f11138c2f42e6bc16e264609f7cf63fd4ef0448bcfbcd8a9"
	I0110 08:21:53.155965   18719 cri.go:96] found id: "c98446ea3c7dfb0a00df0216bc2d759bfb2abd9423c844e37dd879a9f1842ce2"
	I0110 08:21:53.155969   18719 cri.go:96] found id: "5f8174e666e70787ef47924deb832bd181b1187a4f93adde2719afd222583233"
	I0110 08:21:53.155974   18719 cri.go:96] found id: "273ea06f61975e2f11258602b200a4bc83fa37bbfdf217550f9a076abce1b87f"
	I0110 08:21:53.155978   18719 cri.go:96] found id: "5394108065d3cfd7430eab4005c30e45dbab412c8e88c64087470bc4bdf68d94"
	I0110 08:21:53.155982   18719 cri.go:96] found id: "b80c4916fca961561a4c4f5172bd77ec246690c99788e26501b0d53f2be91145"
	I0110 08:21:53.155987   18719 cri.go:96] found id: "f6fa6e8d4ac06e107cc11e058aaccfc1f58cc2d3bde427b80777917f1c523209"
	I0110 08:21:53.155990   18719 cri.go:96] found id: ""
	I0110 08:21:53.156052   18719 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:21:53.172204   18719 out.go:203] 
	W0110 08:21:53.173595   18719 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:21:53Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 08:21:53.173623   18719 out.go:285] * 
	W0110 08:21:53.174512   18719 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:21:53.175961   18719 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-910183 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.09s)

TestAddons/parallel/NvidiaDevicePlugin (5.25s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-cr698" [35c26c44-934c-4ac5-af77-fe7d9272e2d9] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.002567238s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910183 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910183 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (242.645569ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0110 08:21:47.689342   18184 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:21:47.689539   18184 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:21:47.689552   18184 out.go:374] Setting ErrFile to fd 2...
	I0110 08:21:47.689558   18184 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:21:47.690204   18184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:21:47.690709   18184 mustload.go:66] Loading cluster: addons-910183
	I0110 08:21:47.691797   18184 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:21:47.691830   18184 addons.go:622] checking whether the cluster is paused
	I0110 08:21:47.691976   18184 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:21:47.691992   18184 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:21:47.692586   18184 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:21:47.710992   18184 ssh_runner.go:195] Run: systemctl --version
	I0110 08:21:47.711049   18184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:21:47.731356   18184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:21:47.824915   18184 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:21:47.824987   18184 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:21:47.853021   18184 cri.go:96] found id: "52e1025c4414838af6827ca4e9af267b60d7bab029698821e94b8c8c53b47a02"
	I0110 08:21:47.853045   18184 cri.go:96] found id: "75942fc7d47191bb325503f8137ff909131d1c4d52d50d6647a07624163824bf"
	I0110 08:21:47.853051   18184 cri.go:96] found id: "1f0e0366214d13effb5e1a2fdd2281ce1ab7895177c2879232a660b16f2f36f6"
	I0110 08:21:47.853056   18184 cri.go:96] found id: "ad41fe8eb72ea17b3787fd8b9148b6f74bcbae10876696c1004d2af9067e974a"
	I0110 08:21:47.853061   18184 cri.go:96] found id: "b8196c6709973cd145e81bee513c9295c75a07f339edd1849b8f09eeca3a8902"
	I0110 08:21:47.853066   18184 cri.go:96] found id: "e283b96a0148a542df9ef8145310402e66db65dde0eeb3d91a74bffd800c43b6"
	I0110 08:21:47.853070   18184 cri.go:96] found id: "a3befa7150ca579269272406feaeddcf602e63b5aa269f6a4eb71f0400cf9f65"
	I0110 08:21:47.853073   18184 cri.go:96] found id: "ba1fcf19b0afca5f042f84fda8060304660e32ffcb595b679580f3f1681e8b1b"
	I0110 08:21:47.853076   18184 cri.go:96] found id: "e315fa8c4e521b6643d8334e448d62113f1d082439e58e1761e61d37e8c0adab"
	I0110 08:21:47.853083   18184 cri.go:96] found id: "60d2da1df937e39c979adac7502a680da7554008aebe24a90289fa081d48066c"
	I0110 08:21:47.853091   18184 cri.go:96] found id: "a26f210429ce8f81957c43d13ae7dc0c7795a1237b391a18c23dc21ee3dac83a"
	I0110 08:21:47.853096   18184 cri.go:96] found id: "ac0bb3017c2df5407dc70b2065a8642f17f4632f0eb519eae225e58db88cb7d7"
	I0110 08:21:47.853105   18184 cri.go:96] found id: "80376415781ced2a18f3b49817b93eb4bbe7ad36f4493427b2dcf4382ff8817b"
	I0110 08:21:47.853110   18184 cri.go:96] found id: "f180f3ed8bb6409a8995534d80f2ad8dbca483a62f67c39d69e4883e5841f7b7"
	I0110 08:21:47.853117   18184 cri.go:96] found id: "2149b85c56959d7cc9fe04d1ce74acdf5e9e313ff0eb11e6bab9ca79b3973900"
	I0110 08:21:47.853130   18184 cri.go:96] found id: "82bfb461b79f28c44b698878be57d4f3ed705a492181cc1e2b299581cc8e19c3"
	I0110 08:21:47.853135   18184 cri.go:96] found id: "800ef08a84703f203b5e10eeaa60cc96e4e0b501f7116ab326aec663a8d44a0a"
	I0110 08:21:47.853141   18184 cri.go:96] found id: "cc65f89692fad8f8f11138c2f42e6bc16e264609f7cf63fd4ef0448bcfbcd8a9"
	I0110 08:21:47.853148   18184 cri.go:96] found id: "c98446ea3c7dfb0a00df0216bc2d759bfb2abd9423c844e37dd879a9f1842ce2"
	I0110 08:21:47.853153   18184 cri.go:96] found id: "5f8174e666e70787ef47924deb832bd181b1187a4f93adde2719afd222583233"
	I0110 08:21:47.853166   18184 cri.go:96] found id: "273ea06f61975e2f11258602b200a4bc83fa37bbfdf217550f9a076abce1b87f"
	I0110 08:21:47.853177   18184 cri.go:96] found id: "5394108065d3cfd7430eab4005c30e45dbab412c8e88c64087470bc4bdf68d94"
	I0110 08:21:47.853180   18184 cri.go:96] found id: "b80c4916fca961561a4c4f5172bd77ec246690c99788e26501b0d53f2be91145"
	I0110 08:21:47.853183   18184 cri.go:96] found id: "f6fa6e8d4ac06e107cc11e058aaccfc1f58cc2d3bde427b80777917f1c523209"
	I0110 08:21:47.853185   18184 cri.go:96] found id: ""
	I0110 08:21:47.853231   18184 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:21:47.867501   18184 out.go:203] 
	W0110 08:21:47.868628   18184 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:21:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 08:21:47.868647   18184 out.go:285] * 
	W0110 08:21:47.869327   18184 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:21:47.870499   18184 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-910183 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.25s)

TestAddons/parallel/Yakd (5.24s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-7bcf5795cd-g7g6p" [5c4431a4-55bc-4619-9b70-949de3068727] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003782812s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910183 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910183 addons disable yakd --alsologtostderr -v=1: exit status 11 (235.562982ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0110 08:22:04.480332   21095 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:22:04.480455   21095 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:22:04.480463   21095 out.go:374] Setting ErrFile to fd 2...
	I0110 08:22:04.480468   21095 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:22:04.480683   21095 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:22:04.480969   21095 mustload.go:66] Loading cluster: addons-910183
	I0110 08:22:04.481281   21095 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:22:04.481295   21095 addons.go:622] checking whether the cluster is paused
	I0110 08:22:04.481373   21095 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:22:04.481384   21095 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:22:04.481717   21095 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:22:04.499531   21095 ssh_runner.go:195] Run: systemctl --version
	I0110 08:22:04.499592   21095 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:22:04.517933   21095 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:22:04.609925   21095 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:22:04.610022   21095 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:22:04.640476   21095 cri.go:96] found id: "aebe01af9fa8d6d47cef32b601f05c075ceb41b127c57b221c0216042caeb945"
	I0110 08:22:04.640499   21095 cri.go:96] found id: "52e1025c4414838af6827ca4e9af267b60d7bab029698821e94b8c8c53b47a02"
	I0110 08:22:04.640503   21095 cri.go:96] found id: "75942fc7d47191bb325503f8137ff909131d1c4d52d50d6647a07624163824bf"
	I0110 08:22:04.640507   21095 cri.go:96] found id: "1f0e0366214d13effb5e1a2fdd2281ce1ab7895177c2879232a660b16f2f36f6"
	I0110 08:22:04.640510   21095 cri.go:96] found id: "ad41fe8eb72ea17b3787fd8b9148b6f74bcbae10876696c1004d2af9067e974a"
	I0110 08:22:04.640513   21095 cri.go:96] found id: "b8196c6709973cd145e81bee513c9295c75a07f339edd1849b8f09eeca3a8902"
	I0110 08:22:04.640515   21095 cri.go:96] found id: "e283b96a0148a542df9ef8145310402e66db65dde0eeb3d91a74bffd800c43b6"
	I0110 08:22:04.640518   21095 cri.go:96] found id: "a3befa7150ca579269272406feaeddcf602e63b5aa269f6a4eb71f0400cf9f65"
	I0110 08:22:04.640520   21095 cri.go:96] found id: "ba1fcf19b0afca5f042f84fda8060304660e32ffcb595b679580f3f1681e8b1b"
	I0110 08:22:04.640526   21095 cri.go:96] found id: "e315fa8c4e521b6643d8334e448d62113f1d082439e58e1761e61d37e8c0adab"
	I0110 08:22:04.640529   21095 cri.go:96] found id: "60d2da1df937e39c979adac7502a680da7554008aebe24a90289fa081d48066c"
	I0110 08:22:04.640548   21095 cri.go:96] found id: "a26f210429ce8f81957c43d13ae7dc0c7795a1237b391a18c23dc21ee3dac83a"
	I0110 08:22:04.640558   21095 cri.go:96] found id: "ac0bb3017c2df5407dc70b2065a8642f17f4632f0eb519eae225e58db88cb7d7"
	I0110 08:22:04.640564   21095 cri.go:96] found id: "80376415781ced2a18f3b49817b93eb4bbe7ad36f4493427b2dcf4382ff8817b"
	I0110 08:22:04.640574   21095 cri.go:96] found id: "f180f3ed8bb6409a8995534d80f2ad8dbca483a62f67c39d69e4883e5841f7b7"
	I0110 08:22:04.640584   21095 cri.go:96] found id: "2149b85c56959d7cc9fe04d1ce74acdf5e9e313ff0eb11e6bab9ca79b3973900"
	I0110 08:22:04.640587   21095 cri.go:96] found id: "82bfb461b79f28c44b698878be57d4f3ed705a492181cc1e2b299581cc8e19c3"
	I0110 08:22:04.640590   21095 cri.go:96] found id: "800ef08a84703f203b5e10eeaa60cc96e4e0b501f7116ab326aec663a8d44a0a"
	I0110 08:22:04.640593   21095 cri.go:96] found id: "cc65f89692fad8f8f11138c2f42e6bc16e264609f7cf63fd4ef0448bcfbcd8a9"
	I0110 08:22:04.640596   21095 cri.go:96] found id: "c98446ea3c7dfb0a00df0216bc2d759bfb2abd9423c844e37dd879a9f1842ce2"
	I0110 08:22:04.640602   21095 cri.go:96] found id: "5f8174e666e70787ef47924deb832bd181b1187a4f93adde2719afd222583233"
	I0110 08:22:04.640605   21095 cri.go:96] found id: "273ea06f61975e2f11258602b200a4bc83fa37bbfdf217550f9a076abce1b87f"
	I0110 08:22:04.640608   21095 cri.go:96] found id: "5394108065d3cfd7430eab4005c30e45dbab412c8e88c64087470bc4bdf68d94"
	I0110 08:22:04.640611   21095 cri.go:96] found id: "b80c4916fca961561a4c4f5172bd77ec246690c99788e26501b0d53f2be91145"
	I0110 08:22:04.640614   21095 cri.go:96] found id: "f6fa6e8d4ac06e107cc11e058aaccfc1f58cc2d3bde427b80777917f1c523209"
	I0110 08:22:04.640616   21095 cri.go:96] found id: ""
	I0110 08:22:04.640663   21095 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:22:04.655533   21095 out.go:203] 
	W0110 08:22:04.656879   21095 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:22:04Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:22:04Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 08:22:04.656916   21095 out.go:285] * 
	* 
	W0110 08:22:04.657606   21095 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:22:04.658855   21095 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-910183 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.24s)
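
One detail worth noting for triage: in this failure (and every other addon-disable failure in this run) the crictl listing of kube-system containers succeeds, and only the follow-up "sudo runc list -f json" fails with "open /run/runc: no such file or directory". A plausible reading, not confirmed by this report, is that CRI-O on this image launches containers through an OCI runtime other than runc, so runc's state directory never exists. A rough check against a live cluster could look like the sketch below; "/run/crun" is a guess at the alternative state directory, not something this log shows.

	# Sketch: check which OCI runtime CRI-O is configured with, and which state dirs exist.
	out/minikube-linux-amd64 ssh -p addons-910183 -- "sudo grep -r default_runtime /etc/crio/"
	out/minikube-linux-amd64 ssh -p addons-910183 -- "sudo ls -d /run/runc /run/crun"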

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-8zrhw" [3e3fd217-ef7c-4343-8d3f-a9f9e21b8dcc] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003477009s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-910183 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-910183 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (248.286623ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 08:22:01.858665   20321 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:22:01.858868   20321 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:22:01.858881   20321 out.go:374] Setting ErrFile to fd 2...
	I0110 08:22:01.858888   20321 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:22:01.859126   20321 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:22:01.859435   20321 mustload.go:66] Loading cluster: addons-910183
	I0110 08:22:01.859849   20321 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:22:01.859870   20321 addons.go:622] checking whether the cluster is paused
	I0110 08:22:01.859981   20321 config.go:182] Loaded profile config "addons-910183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:22:01.859998   20321 host.go:66] Checking if "addons-910183" exists ...
	I0110 08:22:01.860392   20321 cli_runner.go:164] Run: docker container inspect addons-910183 --format={{.State.Status}}
	I0110 08:22:01.879584   20321 ssh_runner.go:195] Run: systemctl --version
	I0110 08:22:01.879644   20321 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-910183
	I0110 08:22:01.899458   20321 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/addons-910183/id_rsa Username:docker}
	I0110 08:22:01.990929   20321 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:22:01.991000   20321 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:22:02.025142   20321 cri.go:96] found id: "aebe01af9fa8d6d47cef32b601f05c075ceb41b127c57b221c0216042caeb945"
	I0110 08:22:02.025182   20321 cri.go:96] found id: "52e1025c4414838af6827ca4e9af267b60d7bab029698821e94b8c8c53b47a02"
	I0110 08:22:02.025189   20321 cri.go:96] found id: "75942fc7d47191bb325503f8137ff909131d1c4d52d50d6647a07624163824bf"
	I0110 08:22:02.025201   20321 cri.go:96] found id: "1f0e0366214d13effb5e1a2fdd2281ce1ab7895177c2879232a660b16f2f36f6"
	I0110 08:22:02.025206   20321 cri.go:96] found id: "ad41fe8eb72ea17b3787fd8b9148b6f74bcbae10876696c1004d2af9067e974a"
	I0110 08:22:02.025217   20321 cri.go:96] found id: "b8196c6709973cd145e81bee513c9295c75a07f339edd1849b8f09eeca3a8902"
	I0110 08:22:02.025222   20321 cri.go:96] found id: "e283b96a0148a542df9ef8145310402e66db65dde0eeb3d91a74bffd800c43b6"
	I0110 08:22:02.025226   20321 cri.go:96] found id: "a3befa7150ca579269272406feaeddcf602e63b5aa269f6a4eb71f0400cf9f65"
	I0110 08:22:02.025230   20321 cri.go:96] found id: "ba1fcf19b0afca5f042f84fda8060304660e32ffcb595b679580f3f1681e8b1b"
	I0110 08:22:02.025247   20321 cri.go:96] found id: "e315fa8c4e521b6643d8334e448d62113f1d082439e58e1761e61d37e8c0adab"
	I0110 08:22:02.025258   20321 cri.go:96] found id: "60d2da1df937e39c979adac7502a680da7554008aebe24a90289fa081d48066c"
	I0110 08:22:02.025262   20321 cri.go:96] found id: "a26f210429ce8f81957c43d13ae7dc0c7795a1237b391a18c23dc21ee3dac83a"
	I0110 08:22:02.025267   20321 cri.go:96] found id: "ac0bb3017c2df5407dc70b2065a8642f17f4632f0eb519eae225e58db88cb7d7"
	I0110 08:22:02.025278   20321 cri.go:96] found id: "80376415781ced2a18f3b49817b93eb4bbe7ad36f4493427b2dcf4382ff8817b"
	I0110 08:22:02.025283   20321 cri.go:96] found id: "f180f3ed8bb6409a8995534d80f2ad8dbca483a62f67c39d69e4883e5841f7b7"
	I0110 08:22:02.025301   20321 cri.go:96] found id: "2149b85c56959d7cc9fe04d1ce74acdf5e9e313ff0eb11e6bab9ca79b3973900"
	I0110 08:22:02.025305   20321 cri.go:96] found id: "82bfb461b79f28c44b698878be57d4f3ed705a492181cc1e2b299581cc8e19c3"
	I0110 08:22:02.025311   20321 cri.go:96] found id: "800ef08a84703f203b5e10eeaa60cc96e4e0b501f7116ab326aec663a8d44a0a"
	I0110 08:22:02.025315   20321 cri.go:96] found id: "cc65f89692fad8f8f11138c2f42e6bc16e264609f7cf63fd4ef0448bcfbcd8a9"
	I0110 08:22:02.025319   20321 cri.go:96] found id: "c98446ea3c7dfb0a00df0216bc2d759bfb2abd9423c844e37dd879a9f1842ce2"
	I0110 08:22:02.025326   20321 cri.go:96] found id: "5f8174e666e70787ef47924deb832bd181b1187a4f93adde2719afd222583233"
	I0110 08:22:02.025330   20321 cri.go:96] found id: "273ea06f61975e2f11258602b200a4bc83fa37bbfdf217550f9a076abce1b87f"
	I0110 08:22:02.025334   20321 cri.go:96] found id: "5394108065d3cfd7430eab4005c30e45dbab412c8e88c64087470bc4bdf68d94"
	I0110 08:22:02.025338   20321 cri.go:96] found id: "b80c4916fca961561a4c4f5172bd77ec246690c99788e26501b0d53f2be91145"
	I0110 08:22:02.025342   20321 cri.go:96] found id: "f6fa6e8d4ac06e107cc11e058aaccfc1f58cc2d3bde427b80777917f1c523209"
	I0110 08:22:02.025346   20321 cri.go:96] found id: ""
	I0110 08:22:02.025413   20321 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:22:02.041385   20321 out.go:203] 
	W0110 08:22:02.042718   20321 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:22:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:22:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 08:22:02.042828   20321 out.go:285] * 
	* 
	W0110 08:22:02.043572   20321 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:22:02.044818   20321 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-910183 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.25s)
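
The signature here is identical to the Yakd failure above. The crictl half of the paused-check can be replayed verbatim from the trace and succeeds; only the runc listing fails:

	# Mirrors the crictl invocation from the trace above (this step succeeds in the trace).
	out/minikube-linux-amd64 ssh -p addons-910183 -- "sudo crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"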

                                                
                                    
TestJSONOutput/pause/Command (2.26s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-308285 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-308285 --output=json --user=testUser: exit status 80 (2.258090147s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"62b1ff65-c058-4d08-928b-54a7aef3fc48","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-308285 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"938ee23f-b92e-4ed5-992a-5d649e01bc37","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2026-01-10T08:34:26Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"5520859c-c1f1-4043-a346-bfd479412eb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-308285 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.26s)
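
With --output=json, minikube emits one CloudEvents object per line, so the failure above is machine-readable. Assuming jq is available, a small filter pulls out just the error payloads (the field names match the events shown above):

	# Print only the error messages from minikube's JSON event stream.
	out/minikube-linux-amd64 pause -p json-output-308285 --output=json --user=testUser \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'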

                                                
                                    
TestJSONOutput/unpause/Command (2.09s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-308285 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-308285 --output=json --user=testUser: exit status 80 (2.090513148s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"26a21681-ab56-485d-a0f8-fc19257782c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-308285 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"ed0f6798-10ce-4799-8d38-6faa7919740a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2026-01-10T08:34:28Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"42eb548e-ecc2-474a-99b4-2b491f494749","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-308285 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.09s)

                                                
                                    
TestPause/serial/Pause (5.2s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-678123 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-678123 --alsologtostderr -v=5: exit status 80 (1.635621505s)

                                                
                                                
-- stdout --
	* Pausing node pause-678123 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 08:45:00.496254  179373 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:45:00.496491  179373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:45:00.496499  179373 out.go:374] Setting ErrFile to fd 2...
	I0110 08:45:00.496503  179373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:45:00.496680  179373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:45:00.496933  179373 out.go:368] Setting JSON to false
	I0110 08:45:00.496953  179373 mustload.go:66] Loading cluster: pause-678123
	I0110 08:45:00.497316  179373 config.go:182] Loaded profile config "pause-678123": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:45:00.497660  179373 cli_runner.go:164] Run: docker container inspect pause-678123 --format={{.State.Status}}
	I0110 08:45:00.516053  179373 host.go:66] Checking if "pause-678123" exists ...
	I0110 08:45:00.516311  179373 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:45:00.573611  179373 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:57 OomKillDisable:false NGoroutines:72 SystemTime:2026-01-10 08:45:00.561586483 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:45:00.574240  179373 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:pause-678123 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 08:45:00.625539  179373 out.go:179] * Pausing node pause-678123 ... 
	I0110 08:45:00.651945  179373 host.go:66] Checking if "pause-678123" exists ...
	I0110 08:45:00.652399  179373 ssh_runner.go:195] Run: systemctl --version
	I0110 08:45:00.652451  179373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-678123
	I0110 08:45:00.670385  179373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/pause-678123/id_rsa Username:docker}
	I0110 08:45:00.763649  179373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:45:00.776554  179373 pause.go:52] kubelet running: true
	I0110 08:45:00.776630  179373 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 08:45:00.898521  179373 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 08:45:00.898644  179373 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 08:45:00.965246  179373 cri.go:96] found id: "f3ea8d5598a5ea9d8e3a390c18f790444ae5cb11b7c0226f0b144b9d05c83a04"
	I0110 08:45:00.965278  179373 cri.go:96] found id: "5caf2e27402c6bf30d53a931012ff5849fde4125867d91d0913fce93b68c0021"
	I0110 08:45:00.965285  179373 cri.go:96] found id: "0ec28ff9a5a230ef38b5b4a4e2fb64fdcf59fa83b33592705b9c7c3586711f1c"
	I0110 08:45:00.965290  179373 cri.go:96] found id: "1206eca28b9b971a40e042d2cbfbee5210ee9fec259b792781c02e55620e92db"
	I0110 08:45:00.965295  179373 cri.go:96] found id: "855477d18a3a24c8ce5384ee94e9fbbf34b25e8c7221d8828b4fbb3bbb98e8b5"
	I0110 08:45:00.965299  179373 cri.go:96] found id: "2b24ef461e6b2db4c3746849bf15b7a5bb7ede8d1ad7b23c1d938ec8e945e86d"
	I0110 08:45:00.965302  179373 cri.go:96] found id: "f36022cb1f6ebf0fb589be28e6c4a599c94aa3ed350eedaa9ae426ea44cf5016"
	I0110 08:45:00.965304  179373 cri.go:96] found id: ""
	I0110 08:45:00.965341  179373 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:45:00.977625  179373 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:45:00Z" level=error msg="open /run/runc: no such file or directory"
	I0110 08:45:01.306208  179373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:45:01.319064  179373 pause.go:52] kubelet running: false
	I0110 08:45:01.319128  179373 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 08:45:01.426877  179373 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 08:45:01.426988  179373 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 08:45:01.503324  179373 cri.go:96] found id: "f3ea8d5598a5ea9d8e3a390c18f790444ae5cb11b7c0226f0b144b9d05c83a04"
	I0110 08:45:01.503350  179373 cri.go:96] found id: "5caf2e27402c6bf30d53a931012ff5849fde4125867d91d0913fce93b68c0021"
	I0110 08:45:01.503356  179373 cri.go:96] found id: "0ec28ff9a5a230ef38b5b4a4e2fb64fdcf59fa83b33592705b9c7c3586711f1c"
	I0110 08:45:01.503361  179373 cri.go:96] found id: "1206eca28b9b971a40e042d2cbfbee5210ee9fec259b792781c02e55620e92db"
	I0110 08:45:01.503366  179373 cri.go:96] found id: "855477d18a3a24c8ce5384ee94e9fbbf34b25e8c7221d8828b4fbb3bbb98e8b5"
	I0110 08:45:01.503370  179373 cri.go:96] found id: "2b24ef461e6b2db4c3746849bf15b7a5bb7ede8d1ad7b23c1d938ec8e945e86d"
	I0110 08:45:01.503373  179373 cri.go:96] found id: "f36022cb1f6ebf0fb589be28e6c4a599c94aa3ed350eedaa9ae426ea44cf5016"
	I0110 08:45:01.503376  179373 cri.go:96] found id: ""
	I0110 08:45:01.503444  179373 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:45:01.865251  179373 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:45:01.878434  179373 pause.go:52] kubelet running: false
	I0110 08:45:01.878514  179373 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 08:45:01.987068  179373 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 08:45:01.987165  179373 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 08:45:02.053255  179373 cri.go:96] found id: "f3ea8d5598a5ea9d8e3a390c18f790444ae5cb11b7c0226f0b144b9d05c83a04"
	I0110 08:45:02.053276  179373 cri.go:96] found id: "5caf2e27402c6bf30d53a931012ff5849fde4125867d91d0913fce93b68c0021"
	I0110 08:45:02.053280  179373 cri.go:96] found id: "0ec28ff9a5a230ef38b5b4a4e2fb64fdcf59fa83b33592705b9c7c3586711f1c"
	I0110 08:45:02.053283  179373 cri.go:96] found id: "1206eca28b9b971a40e042d2cbfbee5210ee9fec259b792781c02e55620e92db"
	I0110 08:45:02.053286  179373 cri.go:96] found id: "855477d18a3a24c8ce5384ee94e9fbbf34b25e8c7221d8828b4fbb3bbb98e8b5"
	I0110 08:45:02.053289  179373 cri.go:96] found id: "2b24ef461e6b2db4c3746849bf15b7a5bb7ede8d1ad7b23c1d938ec8e945e86d"
	I0110 08:45:02.053291  179373 cri.go:96] found id: "f36022cb1f6ebf0fb589be28e6c4a599c94aa3ed350eedaa9ae426ea44cf5016"
	I0110 08:45:02.053294  179373 cri.go:96] found id: ""
	I0110 08:45:02.053337  179373 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:45:02.068335  179373 out.go:203] 
	W0110 08:45:02.069854  179373 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:45:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:45:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 08:45:02.069876  179373 out.go:285] * 
	* 
	W0110 08:45:02.071867  179373 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:45:02.073378  179373 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-678123 --alsologtostderr -v=5" : exit status 80
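
Before the final GUEST_PAUSE exit, the trace shows one retry pass ("will retry after 300ms", retry.go:84) in which the kubelet is already disabled but the runc listing keeps failing. The retry-then-fail shape is roughly the following shell sketch (illustrative only, not minikube's actual retry code; it would run on the node, e.g. via minikube ssh):

	# Retry "runc list" a few times with a short delay, then give up -
	# the pattern visible at retry.go:84 above. Purely illustrative.
	for attempt in 1 2 3; do
	  sudo runc list -f json && break
	  echo "attempt ${attempt} failed; will retry after 300ms" >&2
	  sleep 0.3
	done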
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-678123
helpers_test.go:244: (dbg) docker inspect pause-678123:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9922b73863bb2644965cd2d7ef6181d70adcd4bed50556be8d16f27f9127811c",
	        "Created": "2026-01-10T08:44:11.26925534Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 165414,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T08:44:12.270281996Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/9922b73863bb2644965cd2d7ef6181d70adcd4bed50556be8d16f27f9127811c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9922b73863bb2644965cd2d7ef6181d70adcd4bed50556be8d16f27f9127811c/hostname",
	        "HostsPath": "/var/lib/docker/containers/9922b73863bb2644965cd2d7ef6181d70adcd4bed50556be8d16f27f9127811c/hosts",
	        "LogPath": "/var/lib/docker/containers/9922b73863bb2644965cd2d7ef6181d70adcd4bed50556be8d16f27f9127811c/9922b73863bb2644965cd2d7ef6181d70adcd4bed50556be8d16f27f9127811c-json.log",
	        "Name": "/pause-678123",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-678123:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-678123",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9922b73863bb2644965cd2d7ef6181d70adcd4bed50556be8d16f27f9127811c",
	                "LowerDir": "/var/lib/docker/overlay2/158928b75aaae99bf4f98241df18202fbafee5b7159141ac27ee3f8dc5ab8003-init/diff:/var/lib/docker/overlay2/97b14eb520192356c991915ce74cd6094e6ef948f398ac688b667ea18491ccde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158928b75aaae99bf4f98241df18202fbafee5b7159141ac27ee3f8dc5ab8003/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158928b75aaae99bf4f98241df18202fbafee5b7159141ac27ee3f8dc5ab8003/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158928b75aaae99bf4f98241df18202fbafee5b7159141ac27ee3f8dc5ab8003/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-678123",
	                "Source": "/var/lib/docker/volumes/pause-678123/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-678123",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-678123",
	                "name.minikube.sigs.k8s.io": "pause-678123",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "04bb3391602ae19527e87136593851b84ff9c027def654bd3422b22e5d94d675",
	            "SandboxKey": "/var/run/docker/netns/04bb3391602a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32963"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32964"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32967"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32965"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32966"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-678123": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "da73e98c01a21ed7c72204225a3074f49f65f3ce3cdc15a9a317580f4a9c6957",
	                    "EndpointID": "50251e56b1ad1ffdca35895803bc227179984d64c9c4fedd2f9a05386c6d3846",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "62:ac:49:97:30:08",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-678123",
	                        "9922b73863bb"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
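
The Ports map in the inspect output above is what the harness's Go template resolves against; run standalone, the same template prints the published SSH port for the session (32963 here):

	# Extract the host port mapped to the node's SSH port, as the harness does.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-678123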
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-678123 -n pause-678123
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-678123 -n pause-678123: exit status 2 (361.568053ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-678123 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-678123 logs -n 25: (1.028428549s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p scheduled-stop-701534 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:42 UTC │ 10 Jan 26 08:42 UTC │
	│ stop    │ -p scheduled-stop-701534 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:42 UTC │                     │
	│ stop    │ -p scheduled-stop-701534 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:42 UTC │                     │
	│ stop    │ -p scheduled-stop-701534 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:42 UTC │                     │
	│ stop    │ -p scheduled-stop-701534 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:42 UTC │                     │
	│ stop    │ -p scheduled-stop-701534 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:42 UTC │                     │
	│ stop    │ -p scheduled-stop-701534 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:42 UTC │                     │
	│ stop    │ -p scheduled-stop-701534 --cancel-scheduled                                                                                              │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:42 UTC │ 10 Jan 26 08:42 UTC │
	│ stop    │ -p scheduled-stop-701534 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:42 UTC │                     │
	│ stop    │ -p scheduled-stop-701534 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:42 UTC │                     │
	│ stop    │ -p scheduled-stop-701534 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:42 UTC │ 10 Jan 26 08:43 UTC │
	│ delete  │ -p scheduled-stop-701534                                                                                                                 │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:43 UTC │ 10 Jan 26 08:43 UTC │
	│ start   │ -p insufficient-storage-221766 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-221766 │ jenkins │ v1.37.0 │ 10 Jan 26 08:43 UTC │                     │
	│ delete  │ -p insufficient-storage-221766                                                                                                           │ insufficient-storage-221766 │ jenkins │ v1.37.0 │ 10 Jan 26 08:43 UTC │ 10 Jan 26 08:43 UTC │
	│ start   │ -p offline-crio-669446 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-669446         │ jenkins │ v1.37.0 │ 10 Jan 26 08:43 UTC │ 10 Jan 26 08:44 UTC │
	│ start   │ -p pause-678123 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-678123                │ jenkins │ v1.37.0 │ 10 Jan 26 08:43 UTC │ 10 Jan 26 08:44 UTC │
	│ start   │ -p stopped-upgrade-761816 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-761816      │ jenkins │ v1.35.0 │ 10 Jan 26 08:43 UTC │ 10 Jan 26 08:44 UTC │
	│ start   │ -p missing-upgrade-854643 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-854643      │ jenkins │ v1.35.0 │ 10 Jan 26 08:43 UTC │ 10 Jan 26 08:44 UTC │
	│ stop    │ stopped-upgrade-761816 stop                                                                                                              │ stopped-upgrade-761816      │ jenkins │ v1.35.0 │ 10 Jan 26 08:44 UTC │ 10 Jan 26 08:44 UTC │
	│ start   │ -p missing-upgrade-854643 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-854643      │ jenkins │ v1.37.0 │ 10 Jan 26 08:44 UTC │                     │
	│ start   │ -p stopped-upgrade-761816 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-761816      │ jenkins │ v1.37.0 │ 10 Jan 26 08:44 UTC │                     │
	│ delete  │ -p offline-crio-669446                                                                                                                   │ offline-crio-669446         │ jenkins │ v1.37.0 │ 10 Jan 26 08:44 UTC │ 10 Jan 26 08:44 UTC │
	│ start   │ -p pause-678123 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-678123                │ jenkins │ v1.37.0 │ 10 Jan 26 08:44 UTC │ 10 Jan 26 08:45 UTC │
	│ start   │ -p kubernetes-upgrade-182534 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-182534   │ jenkins │ v1.37.0 │ 10 Jan 26 08:44 UTC │                     │
	│ pause   │ -p pause-678123 --alsologtostderr -v=5                                                                                                   │ pause-678123                │ jenkins │ v1.37.0 │ 10 Jan 26 08:45 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:44:56
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:44:56.336456  178486 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:44:56.336698  178486 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:44:56.336707  178486 out.go:374] Setting ErrFile to fd 2...
	I0110 08:44:56.336711  178486 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:44:56.336918  178486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:44:56.337384  178486 out.go:368] Setting JSON to false
	I0110 08:44:56.338345  178486 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1648,"bootTime":1768033048,"procs":276,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:44:56.338395  178486 start.go:143] virtualization: kvm guest
	I0110 08:44:56.340456  178486 out.go:179] * [kubernetes-upgrade-182534] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 08:44:56.341921  178486 notify.go:221] Checking for updates...
	I0110 08:44:56.341946  178486 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:44:56.343407  178486 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:44:56.344834  178486 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:44:56.346054  178486 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:44:56.347295  178486 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 08:44:56.348795  178486 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:44:56.350586  178486 config.go:182] Loaded profile config "missing-upgrade-854643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0110 08:44:56.350762  178486 config.go:182] Loaded profile config "pause-678123": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:44:56.350865  178486 config.go:182] Loaded profile config "stopped-upgrade-761816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0110 08:44:56.350972  178486 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:44:56.376670  178486 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:44:56.376997  178486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:44:56.445954  178486 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:67 SystemTime:2026-01-10 08:44:56.435880465 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:44:56.446057  178486 docker.go:319] overlay module found
	I0110 08:44:56.447805  178486 out.go:179] * Using the docker driver based on user configuration
	I0110 08:44:56.449046  178486 start.go:309] selected driver: docker
	I0110 08:44:56.449065  178486 start.go:928] validating driver "docker" against <nil>
	I0110 08:44:56.449091  178486 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:44:56.449720  178486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:44:56.506792  178486 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:67 SystemTime:2026-01-10 08:44:56.496994437 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
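
Both `docker system info --format "{{json .}}"` calls above are minikube health-checking the docker driver, once before and once after selecting it; of the returned JSON, the fields referenced later in this run include the cgroup driver, server version, and CPU/memory totals. A minimal sketch pulling just those fields with Go templates, assuming a local docker CLI pointed at the same daemon:

    # Query only the fields the driver check actually reads, not the full blob
    docker system info --format 'cgroup driver: {{.CgroupDriver}}'
    docker system info --format 'server: {{.ServerVersion}}  cpus: {{.NCPU}}  mem: {{.MemTotal}}'
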
	I0110 08:44:56.506969  178486 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 08:44:56.507203  178486 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 08:44:56.508809  178486 out.go:179] * Using Docker driver with root privileges
	I0110 08:44:56.510270  178486 cni.go:84] Creating CNI manager for ""
	I0110 08:44:56.510342  178486 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:44:56.510358  178486 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 08:44:56.510518  178486 start.go:353] cluster config:
	{Name:kubernetes-upgrade-182534 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-182534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:44:56.511904  178486 out.go:179] * Starting "kubernetes-upgrade-182534" primary control-plane node in "kubernetes-upgrade-182534" cluster
	I0110 08:44:56.513169  178486 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 08:44:56.514578  178486 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:44:56.515750  178486 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 08:44:56.515785  178486 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0110 08:44:56.515792  178486 cache.go:65] Caching tarball of preloaded images
	I0110 08:44:56.515854  178486 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:44:56.515906  178486 preload.go:251] Found /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 08:44:56.515922  178486 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0110 08:44:56.516019  178486 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/kubernetes-upgrade-182534/config.json ...
	I0110 08:44:56.516042  178486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/kubernetes-upgrade-182534/config.json: {Name:mk0fec41d6a9f462e0159b34bc1c15ae991f15c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:44:56.537585  178486 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 08:44:56.537601  178486 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 08:44:56.537616  178486 cache.go:243] Successfully downloaded all kic artifacts
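
The three cache lines above boil down to a digest-pinned image lookup in the local daemon: if the kicbase image is already present, both the pull and the load are skipped. The same check can be made by hand (reference copied verbatim from the log):

    docker image inspect --format '{{.Id}}' \
        gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 \
        && echo "in daemon: pull and load will be skipped"
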
	I0110 08:44:56.537647  178486 start.go:360] acquireMachinesLock for kubernetes-upgrade-182534: {Name:mk14b488606d391bb7eb08340862fd84e2a57eba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:44:56.537766  178486 start.go:364] duration metric: took 100.476µs to acquireMachinesLock for "kubernetes-upgrade-182534"
	I0110 08:44:56.537797  178486 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-182534 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-182534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 08:44:56.537861  178486 start.go:125] createHost starting for "" (driver="docker")
	I0110 08:44:55.826854  175395 cli_runner.go:164] Run: docker container inspect missing-upgrade-854643 --format={{.State.Status}}
	W0110 08:44:55.845955  175395 cli_runner.go:211] docker container inspect missing-upgrade-854643 --format={{.State.Status}} returned with exit code 1
	I0110 08:44:55.846023  175395 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-854643": docker container inspect missing-upgrade-854643 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-854643
	I0110 08:44:55.846041  175395 oci.go:673] temporary error: container missing-upgrade-854643 status is  but expect it to be exited
	I0110 08:44:55.846074  175395 retry.go:84] will retry after 5.6s: couldn't verify container is exited. %v: unknown state "missing-upgrade-854643": docker container inspect missing-upgrade-854643 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-854643
	I0110 08:44:56.135810  175623 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0110 08:44:56.135866  175623 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0110 08:44:54.476543  177687 out.go:252] * Updating the running docker "pause-678123" container ...
	I0110 08:44:54.476578  177687 machine.go:94] provisionDockerMachine start ...
	I0110 08:44:54.476661  177687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-678123
	I0110 08:44:54.494226  177687 main.go:144] libmachine: Using SSH client type: native
	I0110 08:44:54.494465  177687 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32963 <nil> <nil>}
	I0110 08:44:54.494477  177687 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 08:44:54.619276  177687 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-678123
	
	I0110 08:44:54.619308  177687 ubuntu.go:182] provisioning hostname "pause-678123"
	I0110 08:44:54.619374  177687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-678123
	I0110 08:44:54.639106  177687 main.go:144] libmachine: Using SSH client type: native
	I0110 08:44:54.639324  177687 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32963 <nil> <nil>}
	I0110 08:44:54.639337  177687 main.go:144] libmachine: About to run SSH command:
	sudo hostname pause-678123 && echo "pause-678123" | sudo tee /etc/hostname
	I0110 08:44:54.774787  177687 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-678123
	
	I0110 08:44:54.774858  177687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-678123
	I0110 08:44:54.793013  177687 main.go:144] libmachine: Using SSH client type: native
	I0110 08:44:54.793228  177687 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32963 <nil> <nil>}
	I0110 08:44:54.793244  177687 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-678123' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-678123/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-678123' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 08:44:54.920003  177687 main.go:144] libmachine: SSH cmd err, output: <nil>: 
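
The SSH snippet above is the provisioner's idempotent hostname fix-up: it rewrites /etc/hosts only when no line already ends in the machine name, and prefers editing an existing 127.0.1.1 entry over appending a new one. A minimal stand-alone sketch of the same logic, run against a scratch copy so nothing real is touched (HOSTS, NAME, and the seed contents are hypothetical):

    HOSTS=/tmp/hosts.scratch                      # scratch copy, not /etc/hosts
    NAME=pause-678123
    printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"
    if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
        if grep -q '^127.0.1.1[[:space:]]' "$HOSTS"; then
            sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
        else
            echo "127.0.1.1 $NAME" >> "$HOSTS"
        fi
    fi
    cat "$HOSTS"                                  # expect: 127.0.1.1 pause-678123
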
	I0110 08:44:54.920034  177687 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-3641/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-3641/.minikube}
	I0110 08:44:54.920141  177687 ubuntu.go:190] setting up certificates
	I0110 08:44:54.920162  177687 provision.go:84] configureAuth start
	I0110 08:44:54.920210  177687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-678123
	I0110 08:44:54.938232  177687 provision.go:143] copyHostCerts
	I0110 08:44:54.938292  177687 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem, removing ...
	I0110 08:44:54.938308  177687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem
	I0110 08:44:54.938379  177687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem (1675 bytes)
	I0110 08:44:54.938476  177687 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem, removing ...
	I0110 08:44:54.938485  177687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem
	I0110 08:44:54.938513  177687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem (1078 bytes)
	I0110 08:44:54.938587  177687 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem, removing ...
	I0110 08:44:54.938595  177687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem
	I0110 08:44:54.938618  177687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem (1123 bytes)
	I0110 08:44:54.938674  177687 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem org=jenkins.pause-678123 san=[127.0.0.1 192.168.76.2 localhost minikube pause-678123]
	I0110 08:44:55.164707  177687 provision.go:177] copyRemoteCerts
	I0110 08:44:55.164775  177687 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 08:44:55.164824  177687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-678123
	I0110 08:44:55.183899  177687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/pause-678123/id_rsa Username:docker}
	I0110 08:44:55.276617  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0110 08:44:55.294870  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 08:44:55.312378  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0110 08:44:55.331291  177687 provision.go:87] duration metric: took 411.10883ms to configureAuth
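
configureAuth above regenerates the Docker machine server certificate with the SAN list logged by provision.go and copies it into /etc/docker on the node. The SANs on the freshly written server.pem can be confirmed from the host with openssl (path taken from the scp lines above; the -ext flag needs OpenSSL 1.1.1 or newer):

    openssl x509 -noout -subject -ext subjectAltName \
        -in /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem
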
	I0110 08:44:55.331324  177687 ubuntu.go:206] setting minikube options for container-runtime
	I0110 08:44:55.331578  177687 config.go:182] Loaded profile config "pause-678123": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:44:55.331673  177687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-678123
	I0110 08:44:55.350300  177687 main.go:144] libmachine: Using SSH client type: native
	I0110 08:44:55.350509  177687 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32963 <nil> <nil>}
	I0110 08:44:55.350531  177687 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 08:44:55.669432  177687 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 08:44:55.669476  177687 machine.go:97] duration metric: took 1.192867179s to provisionDockerMachine
	I0110 08:44:55.669491  177687 start.go:293] postStartSetup for "pause-678123" (driver="docker")
	I0110 08:44:55.669504  177687 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 08:44:55.669564  177687 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 08:44:55.669609  177687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-678123
	I0110 08:44:55.688376  177687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/pause-678123/id_rsa Username:docker}
	I0110 08:44:55.797471  177687 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 08:44:55.801121  177687 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 08:44:55.801144  177687 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 08:44:55.801155  177687 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-3641/.minikube/addons for local assets ...
	I0110 08:44:55.801218  177687 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-3641/.minikube/files for local assets ...
	I0110 08:44:55.801311  177687 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem -> 71832.pem in /etc/ssl/certs
	I0110 08:44:55.801428  177687 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 08:44:55.809197  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem --> /etc/ssl/certs/71832.pem (1708 bytes)
	I0110 08:44:55.826784  177687 start.go:296] duration metric: took 157.278957ms for postStartSetup
	I0110 08:44:55.826862  177687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:44:55.826936  177687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-678123
	I0110 08:44:55.846426  177687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/pause-678123/id_rsa Username:docker}
	I0110 08:44:55.937306  177687 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 08:44:55.942141  177687 fix.go:56] duration metric: took 1.485814638s for fixHost
	I0110 08:44:55.942167  177687 start.go:83] releasing machines lock for "pause-678123", held for 1.485859602s
	I0110 08:44:55.942242  177687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-678123
	I0110 08:44:55.960706  177687 ssh_runner.go:195] Run: cat /version.json
	I0110 08:44:55.960768  177687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-678123
	I0110 08:44:55.960823  177687 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 08:44:55.960913  177687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-678123
	I0110 08:44:55.981000  177687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/pause-678123/id_rsa Username:docker}
	I0110 08:44:55.981239  177687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/pause-678123/id_rsa Username:docker}
	I0110 08:44:56.071043  177687 ssh_runner.go:195] Run: systemctl --version
	I0110 08:44:56.133749  177687 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 08:44:56.171662  177687 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 08:44:56.176441  177687 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 08:44:56.176519  177687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 08:44:56.184231  177687 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 08:44:56.184252  177687 start.go:496] detecting cgroup driver to use...
	I0110 08:44:56.184291  177687 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 08:44:56.184337  177687 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 08:44:56.199149  177687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 08:44:56.212304  177687 docker.go:218] disabling cri-docker service (if available) ...
	I0110 08:44:56.212357  177687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 08:44:56.227425  177687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 08:44:56.241865  177687 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 08:44:56.370370  177687 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 08:44:56.495973  177687 docker.go:234] disabling docker service ...
	I0110 08:44:56.496049  177687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 08:44:56.513010  177687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 08:44:56.525646  177687 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 08:44:56.642236  177687 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 08:44:56.760125  177687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 08:44:56.773423  177687 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 08:44:56.788361  177687 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 08:44:56.788525  177687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:44:56.798187  177687 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 08:44:56.798248  177687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:44:56.807354  177687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:44:56.818101  177687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:44:56.827368  177687 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 08:44:56.835999  177687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:44:56.847456  177687 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:44:56.856521  177687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:44:56.865884  177687 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 08:44:56.873842  177687 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 08:44:56.882272  177687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:44:56.996326  177687 ssh_runner.go:195] Run: sudo systemctl restart crio
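
The sed pipeline above rewrites CRI-O's drop-in config in place (pause image, cgroup_manager, conmon_cgroup, and the unprivileged-port sysctl) before the daemon-reload and crio restart. Its effect is easiest to see by replaying the core edits on a scratch copy; CONF and the seed contents here are hypothetical:

    CONF=/tmp/02-crio.conf                        # scratch copy of the drop-in
    printf 'pause_image = "registry.k8s.io/pause:3.9"\ncgroup_manager = "cgroupfs"\n' > "$CONF"
    sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    cat "$CONF"    # pause_image and cgroup_manager now match the log's targets
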
	I0110 08:44:57.191630  177687 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 08:44:57.191694  177687 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 08:44:57.195796  177687 start.go:574] Will wait 60s for crictl version
	I0110 08:44:57.195855  177687 ssh_runner.go:195] Run: which crictl
	I0110 08:44:57.199952  177687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 08:44:57.231214  177687 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 08:44:57.231324  177687 ssh_runner.go:195] Run: crio --version
	I0110 08:44:57.261387  177687 ssh_runner.go:195] Run: crio --version
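
After the restart, minikube waits up to 60s for /var/run/crio/crio.sock to appear and then up to 60s for a successful crictl version handshake; the same probe can be issued by hand on the node (assuming crictl is on PATH, as the `which crictl` lookup above verifies):

    sudo crictl --timeout=10s --runtime-endpoint unix:///var/run/crio/crio.sock version
    crio --version
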
	I0110 08:44:57.297887  177687 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 08:44:57.299070  177687 cli_runner.go:164] Run: docker network inspect pause-678123 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:44:57.317503  177687 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 08:44:57.322271  177687 kubeadm.go:884] updating cluster {Name:pause-678123 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-678123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 08:44:57.322451  177687 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:44:57.322507  177687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 08:44:57.360242  177687 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 08:44:57.360262  177687 crio.go:433] Images already preloaded, skipping extraction
	I0110 08:44:57.360304  177687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 08:44:57.389169  177687 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 08:44:57.389201  177687 cache_images.go:86] Images are preloaded, skipping loading
	I0110 08:44:57.389212  177687 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 08:44:57.389348  177687 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-678123 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:pause-678123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
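
The [Unit]/[Service] fragment above is the kubelet drop-in; a few lines below it is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and picked up by `systemctl daemon-reload`. Once that has run on the node, the merged unit and its effective ExecStart can be inspected with:

    systemctl cat kubelet
    systemctl show kubelet -p ExecStart --no-pager
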
	I0110 08:44:57.389439  177687 ssh_runner.go:195] Run: crio config
	I0110 08:44:57.441340  177687 cni.go:84] Creating CNI manager for ""
	I0110 08:44:57.441358  177687 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:44:57.441371  177687 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 08:44:57.441391  177687 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-678123 NodeName:pause-678123 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 08:44:57.441492  177687 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-678123"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
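
	The rendered InitConfiguration/ClusterConfiguration plus kubelet and kube-proxy configs above are written to /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below). As a sanity check the file can be fed back through kubeadm; using `config validate` here assumes the bundled v1.35.0 kubeadm carries that subcommand, as current releases do:

    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new
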
	
	I0110 08:44:57.441545  177687 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 08:44:57.449767  177687 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 08:44:57.449829  177687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 08:44:57.458090  177687 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0110 08:44:57.470611  177687 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 08:44:57.483532  177687 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I0110 08:44:57.496458  177687 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 08:44:57.500670  177687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:44:57.617473  177687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:44:57.631885  177687 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123 for IP: 192.168.76.2
	I0110 08:44:57.631908  177687 certs.go:195] generating shared ca certs ...
	I0110 08:44:57.631922  177687 certs.go:227] acquiring lock for ca certs: {Name:mk00e261408d0e9fd9be39128613c5110a764de2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:44:57.632049  177687 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.key
	I0110 08:44:57.632087  177687 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.key
	I0110 08:44:57.632096  177687 certs.go:257] generating profile certs ...
	I0110 08:44:57.632172  177687 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123/client.key
	I0110 08:44:57.632228  177687 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123/apiserver.key.a35dd3da
	I0110 08:44:57.632262  177687 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123/proxy-client.key
	I0110 08:44:57.632355  177687 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183.pem (1338 bytes)
	W0110 08:44:57.632385  177687 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183_empty.pem, impossibly tiny 0 bytes
	I0110 08:44:57.632394  177687 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem (1675 bytes)
	I0110 08:44:57.632418  177687 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem (1078 bytes)
	I0110 08:44:57.632442  177687 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem (1123 bytes)
	I0110 08:44:57.632465  177687 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem (1675 bytes)
	I0110 08:44:57.632510  177687 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem (1708 bytes)
	I0110 08:44:57.633039  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 08:44:57.652375  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 08:44:57.670367  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 08:44:57.687338  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 08:44:57.705152  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0110 08:44:57.724073  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 08:44:57.742393  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 08:44:57.761146  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 08:44:57.778619  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183.pem --> /usr/share/ca-certificates/7183.pem (1338 bytes)
	I0110 08:44:57.796126  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem --> /usr/share/ca-certificates/71832.pem (1708 bytes)
	I0110 08:44:57.813792  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 08:44:57.830938  177687 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 08:44:57.843887  177687 ssh_runner.go:195] Run: openssl version
	I0110 08:44:57.850264  177687 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:44:57.858451  177687 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 08:44:57.865955  177687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:44:57.869804  177687 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:44:57.869860  177687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:44:57.908197  177687 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 08:44:57.916327  177687 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7183.pem
	I0110 08:44:57.924010  177687 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7183.pem /etc/ssl/certs/7183.pem
	I0110 08:44:57.932002  177687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7183.pem
	I0110 08:44:57.935627  177687 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 08:23 /usr/share/ca-certificates/7183.pem
	I0110 08:44:57.935677  177687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7183.pem
	I0110 08:44:57.970001  177687 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 08:44:57.978291  177687 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/71832.pem
	I0110 08:44:57.986114  177687 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/71832.pem /etc/ssl/certs/71832.pem
	I0110 08:44:57.993915  177687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71832.pem
	I0110 08:44:57.997885  177687 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 08:23 /usr/share/ca-certificates/71832.pem
	I0110 08:44:57.997959  177687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71832.pem
	I0110 08:44:58.032265  177687 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
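
The `test -L` probes above (b5213941.0, 51391683.0, 3ec20f2e.0) check OpenSSL's hashed-name symlinks: each CA in /etc/ssl/certs must be reachable under its subject hash plus a ".0" suffix. The same link can be rebuilt by hand for any of the certs (minikubeCA.pem path taken from the log):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints b5213941 for this CA
    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"
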
	I0110 08:44:58.039944  177687 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 08:44:58.043855  177687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 08:44:58.078414  177687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 08:44:58.113313  177687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 08:44:58.147640  177687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 08:44:58.181616  177687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 08:44:58.215668  177687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
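
Each `openssl x509 -checkend 86400` above exits non-zero if the certificate expires within the next 24 hours, which is how minikube decides whether the existing control-plane certs can be reused as-is. A compact loop over a few of the same files (paths from the log):

    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
        sudo openssl x509 -noout -checkend 86400 \
            -in "/var/lib/minikube/certs/$c.crt" || echo "$c.crt expires within 24h"
    done
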
	I0110 08:44:58.250692  177687 kubeadm.go:401] StartCluster: {Name:pause-678123 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-678123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:44:58.250851  177687 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:44:58.250944  177687 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:44:58.282709  177687 cri.go:96] found id: "f3ea8d5598a5ea9d8e3a390c18f790444ae5cb11b7c0226f0b144b9d05c83a04"
	I0110 08:44:58.282743  177687 cri.go:96] found id: "5caf2e27402c6bf30d53a931012ff5849fde4125867d91d0913fce93b68c0021"
	I0110 08:44:58.282750  177687 cri.go:96] found id: "0ec28ff9a5a230ef38b5b4a4e2fb64fdcf59fa83b33592705b9c7c3586711f1c"
	I0110 08:44:58.282756  177687 cri.go:96] found id: "1206eca28b9b971a40e042d2cbfbee5210ee9fec259b792781c02e55620e92db"
	I0110 08:44:58.282761  177687 cri.go:96] found id: "855477d18a3a24c8ce5384ee94e9fbbf34b25e8c7221d8828b4fbb3bbb98e8b5"
	I0110 08:44:58.282766  177687 cri.go:96] found id: "2b24ef461e6b2db4c3746849bf15b7a5bb7ede8d1ad7b23c1d938ec8e945e86d"
	I0110 08:44:58.282770  177687 cri.go:96] found id: "f36022cb1f6ebf0fb589be28e6c4a599c94aa3ed350eedaa9ae426ea44cf5016"
	I0110 08:44:58.282774  177687 cri.go:96] found id: ""
	I0110 08:44:58.282816  177687 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 08:44:58.296832  177687 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:44:58Z" level=error msg="open /run/runc: no such file or directory"
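
The runc failure above is benign here: `runc list` reads runc's default state root, /run/runc, which does not exist on this image, so minikube logs the warning and carries on with the CRI view of the containers. That view is the label-filtered listing it already ran, reproducible directly on the node:

    sudo crictl --timeout=10s ps -a --quiet \
        --label io.kubernetes.pod.namespace=kube-system   # one container ID per line
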
	I0110 08:44:58.296911  177687 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 08:44:58.305178  177687 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 08:44:58.305199  177687 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 08:44:58.305250  177687 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 08:44:58.312660  177687 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 08:44:58.313627  177687 kubeconfig.go:125] found "pause-678123" server: "https://192.168.76.2:8443"
	I0110 08:44:58.314996  177687 kapi.go:59] client config for pause-678123: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123/client.crt", KeyFile:"/home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123/client.key", CAFile:"/home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f75c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0110 08:44:58.315543  177687 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0110 08:44:58.315565  177687 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I0110 08:44:58.315573  177687 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I0110 08:44:58.315579  177687 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I0110 08:44:58.315590  177687 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0110 08:44:58.315602  177687 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0110 08:44:58.316078  177687 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 08:44:58.323825  177687 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0110 08:44:58.323860  177687 kubeadm.go:602] duration metric: took 18.654348ms to restartPrimaryControlPlane
	I0110 08:44:58.323870  177687 kubeadm.go:403] duration metric: took 73.191474ms to StartCluster
	I0110 08:44:58.323887  177687 settings.go:142] acquiring lock: {Name:mkbb32fc6441ceb31ce2923ea8999f8375298f08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:44:58.323958  177687 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:44:58.324944  177687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/kubeconfig: {Name:mk0d29b3b0ee1fd71729aff31f901be50a2d0664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:44:58.325155  177687 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 08:44:58.325278  177687 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 08:44:58.325374  177687 config.go:182] Loaded profile config "pause-678123": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:44:58.331942  177687 out.go:179] * Verifying Kubernetes components...
	I0110 08:44:58.331949  177687 out.go:179] * Enabled addons: 
	I0110 08:44:58.333451  177687 addons.go:530] duration metric: took 8.183817ms for enable addons: enabled=[]
	I0110 08:44:58.333509  177687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:44:58.449491  177687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:44:58.462451  177687 node_ready.go:35] waiting up to 6m0s for node "pause-678123" to be "Ready" ...
	I0110 08:44:58.470107  177687 node_ready.go:49] node "pause-678123" is "Ready"
	I0110 08:44:58.470129  177687 node_ready.go:38] duration metric: took 7.643111ms for node "pause-678123" to be "Ready" ...
	I0110 08:44:58.470141  177687 api_server.go:52] waiting for apiserver process to appear ...
	I0110 08:44:58.470189  177687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 08:44:58.481319  177687 api_server.go:72] duration metric: took 156.137719ms to wait for apiserver process to appear ...
	I0110 08:44:58.481340  177687 api_server.go:88] waiting for apiserver healthz status ...
	I0110 08:44:58.481363  177687 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 08:44:58.485985  177687 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0110 08:44:58.487163  177687 api_server.go:141] control plane version: v1.35.0
	I0110 08:44:58.487184  177687 api_server.go:131] duration metric: took 5.837916ms to wait for apiserver health ...
	I0110 08:44:58.487192  177687 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 08:44:58.490151  177687 system_pods.go:59] 7 kube-system pods found
	I0110 08:44:58.490173  177687 system_pods.go:61] "coredns-7d764666f9-5f9zf" [79d1ba64-372e-4517-8266-1498a7d0ae38] Running
	I0110 08:44:58.490178  177687 system_pods.go:61] "etcd-pause-678123" [58afbb3d-a513-41f1-a417-397c84c5e698] Running
	I0110 08:44:58.490182  177687 system_pods.go:61] "kindnet-tpclh" [ed69837a-7d90-4463-b544-c354590fc785] Running
	I0110 08:44:58.490186  177687 system_pods.go:61] "kube-apiserver-pause-678123" [d2ef5672-6a05-41ff-9ca0-b5248d7eb1b2] Running
	I0110 08:44:58.490189  177687 system_pods.go:61] "kube-controller-manager-pause-678123" [90a8e78c-ab51-47a5-9701-096c210da6ac] Running
	I0110 08:44:58.490194  177687 system_pods.go:61] "kube-proxy-tp5db" [92ff064a-cbcf-4754-924c-2b7be0b8d914] Running
	I0110 08:44:58.490198  177687 system_pods.go:61] "kube-scheduler-pause-678123" [2e424799-163f-4021-b178-857562dfce89] Running
	I0110 08:44:58.490202  177687 system_pods.go:74] duration metric: took 3.005359ms to wait for pod list to return data ...
	I0110 08:44:58.490212  177687 default_sa.go:34] waiting for default service account to be created ...
	I0110 08:44:58.492037  177687 default_sa.go:45] found service account: "default"
	I0110 08:44:58.492057  177687 default_sa.go:55] duration metric: took 1.839726ms for default service account to be created ...
	I0110 08:44:58.492065  177687 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 08:44:58.494882  177687 system_pods.go:86] 7 kube-system pods found
	I0110 08:44:58.494905  177687 system_pods.go:89] "coredns-7d764666f9-5f9zf" [79d1ba64-372e-4517-8266-1498a7d0ae38] Running
	I0110 08:44:58.494910  177687 system_pods.go:89] "etcd-pause-678123" [58afbb3d-a513-41f1-a417-397c84c5e698] Running
	I0110 08:44:58.494913  177687 system_pods.go:89] "kindnet-tpclh" [ed69837a-7d90-4463-b544-c354590fc785] Running
	I0110 08:44:58.494916  177687 system_pods.go:89] "kube-apiserver-pause-678123" [d2ef5672-6a05-41ff-9ca0-b5248d7eb1b2] Running
	I0110 08:44:58.494921  177687 system_pods.go:89] "kube-controller-manager-pause-678123" [90a8e78c-ab51-47a5-9701-096c210da6ac] Running
	I0110 08:44:58.494925  177687 system_pods.go:89] "kube-proxy-tp5db" [92ff064a-cbcf-4754-924c-2b7be0b8d914] Running
	I0110 08:44:58.494928  177687 system_pods.go:89] "kube-scheduler-pause-678123" [2e424799-163f-4021-b178-857562dfce89] Running
	I0110 08:44:58.494933  177687 system_pods.go:126] duration metric: took 2.864173ms to wait for k8s-apps to be running ...
	I0110 08:44:58.494942  177687 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 08:44:58.494981  177687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:44:58.508578  177687 system_svc.go:56] duration metric: took 13.624562ms WaitForService to wait for kubelet
	I0110 08:44:58.508609  177687 kubeadm.go:587] duration metric: took 183.430061ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 08:44:58.508624  177687 node_conditions.go:102] verifying NodePressure condition ...
	I0110 08:44:58.511443  177687 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 08:44:58.511471  177687 node_conditions.go:123] node cpu capacity is 8
	I0110 08:44:58.511487  177687 node_conditions.go:105] duration metric: took 2.858938ms to run NodePressure ...
	I0110 08:44:58.511502  177687 start.go:242] waiting for startup goroutines ...
	I0110 08:44:58.511514  177687 start.go:247] waiting for cluster config update ...
	I0110 08:44:58.511525  177687 start.go:256] writing updated cluster config ...
	I0110 08:44:58.511885  177687 ssh_runner.go:195] Run: rm -f paused
	I0110 08:44:58.515793  177687 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 08:44:58.516371  177687 kapi.go:59] client config for pause-678123: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123/client.crt", KeyFile:"/home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123/client.key", CAFile:"/home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f75c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0110 08:44:58.519129  177687 pod_ready.go:83] waiting for pod "coredns-7d764666f9-5f9zf" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:44:58.523005  177687 pod_ready.go:94] pod "coredns-7d764666f9-5f9zf" is "Ready"
	I0110 08:44:58.523028  177687 pod_ready.go:86] duration metric: took 3.864853ms for pod "coredns-7d764666f9-5f9zf" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:44:58.524788  177687 pod_ready.go:83] waiting for pod "etcd-pause-678123" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:44:58.528515  177687 pod_ready.go:94] pod "etcd-pause-678123" is "Ready"
	I0110 08:44:58.528535  177687 pod_ready.go:86] duration metric: took 3.731327ms for pod "etcd-pause-678123" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:44:58.530147  177687 pod_ready.go:83] waiting for pod "kube-apiserver-pause-678123" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:44:58.533788  177687 pod_ready.go:94] pod "kube-apiserver-pause-678123" is "Ready"
	I0110 08:44:58.533813  177687 pod_ready.go:86] duration metric: took 3.645501ms for pod "kube-apiserver-pause-678123" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:44:58.535645  177687 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-678123" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:44:58.919866  177687 pod_ready.go:94] pod "kube-controller-manager-pause-678123" is "Ready"
	I0110 08:44:58.919903  177687 pod_ready.go:86] duration metric: took 384.240295ms for pod "kube-controller-manager-pause-678123" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:44:59.119780  177687 pod_ready.go:83] waiting for pod "kube-proxy-tp5db" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:44:59.520117  177687 pod_ready.go:94] pod "kube-proxy-tp5db" is "Ready"
	I0110 08:44:59.520141  177687 pod_ready.go:86] duration metric: took 400.334702ms for pod "kube-proxy-tp5db" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:44:59.720311  177687 pod_ready.go:83] waiting for pod "kube-scheduler-pause-678123" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:45:00.120564  177687 pod_ready.go:94] pod "kube-scheduler-pause-678123" is "Ready"
	I0110 08:45:00.120590  177687 pod_ready.go:86] duration metric: took 400.251897ms for pod "kube-scheduler-pause-678123" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:45:00.120602  177687 pod_ready.go:40] duration metric: took 1.604783538s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 08:45:00.163434  177687 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 08:45:00.324859  177687 out.go:179] * Done! kubectl is now configured to use "pause-678123" cluster and "default" namespace by default
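	The pod_ready lines above poll each kube-system control-plane pod by label until its PodReady condition is True. A minimal client-go sketch of that loop (illustrative only, not minikube's actual helpers; kubeconfig discovery is simplified):

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady mirrors the check behind the pod_ready lines: a pod counts as
// "Ready" when its PodReady condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Assumption for the sketch: the default ~/.kube/config points at the cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Mirrors the 4m0s "extra waiting" budget in the log above.
	deadline := time.Now().Add(4 * time.Minute)
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	for _, sel := range selectors {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 && podReady(&pods.Items[0]) {
				fmt.Printf("pod %q is Ready (%s)\n", pods.Items[0].Name, sel)
				break
			}
			if time.Now().After(deadline) {
				panic("timed out waiting for " + sel)
			}
			time.Sleep(400 * time.Millisecond)
		}
	}
}
```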
	I0110 08:44:56.539805  178486 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 08:44:56.540016  178486 start.go:159] libmachine.API.Create for "kubernetes-upgrade-182534" (driver="docker")
	I0110 08:44:56.540043  178486 client.go:173] LocalClient.Create starting
	I0110 08:44:56.540100  178486 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem
	I0110 08:44:56.540129  178486 main.go:144] libmachine: Decoding PEM data...
	I0110 08:44:56.540147  178486 main.go:144] libmachine: Parsing certificate...
	I0110 08:44:56.540207  178486 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem
	I0110 08:44:56.540226  178486 main.go:144] libmachine: Decoding PEM data...
	I0110 08:44:56.540238  178486 main.go:144] libmachine: Parsing certificate...
	I0110 08:44:56.540526  178486 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-182534 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 08:44:56.560611  178486 cli_runner.go:211] docker network inspect kubernetes-upgrade-182534 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 08:44:56.560710  178486 network_create.go:284] running [docker network inspect kubernetes-upgrade-182534] to gather additional debugging logs...
	I0110 08:44:56.560730  178486 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-182534
	W0110 08:44:56.580481  178486 cli_runner.go:211] docker network inspect kubernetes-upgrade-182534 returned with exit code 1
	I0110 08:44:56.580511  178486 network_create.go:287] error running [docker network inspect kubernetes-upgrade-182534]: docker network inspect kubernetes-upgrade-182534: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-182534 not found
	I0110 08:44:56.580529  178486 network_create.go:289] output of [docker network inspect kubernetes-upgrade-182534]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-182534 not found
	
	** /stderr **
	I0110 08:44:56.580647  178486 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:44:56.597864  178486 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9da35691088c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:0a:0c:fc:dc:fc:2f} reservation:<nil>}
	I0110 08:44:56.598441  178486 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5ce9d5913249 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:11:5d:21:c0:0b} reservation:<nil>}
	I0110 08:44:56.598974  178486 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-73a46a53fce2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:e8:cf:3a:03:99} reservation:<nil>}
	I0110 08:44:56.599497  178486 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-da73e98c01a2 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ee:ca:e6:c3:18:ee} reservation:<nil>}
	I0110 08:44:56.600248  178486 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ea5fb0}
	I0110 08:44:56.600286  178486 network_create.go:124] attempt to create docker network kubernetes-upgrade-182534 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 08:44:56.600340  178486 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-182534 kubernetes-upgrade-182534
	I0110 08:44:56.647640  178486 network_create.go:108] docker network kubernetes-upgrade-182534 192.168.85.0/24 created
	I0110 08:44:56.647669  178486 kic.go:121] calculated static IP "192.168.85.2" for the "kubernetes-upgrade-182534" container
	I0110 08:44:56.647726  178486 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 08:44:56.665000  178486 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-182534 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-182534 --label created_by.minikube.sigs.k8s.io=true
	I0110 08:44:56.689297  178486 oci.go:103] Successfully created a docker volume kubernetes-upgrade-182534
	I0110 08:44:56.689361  178486 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-182534-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-182534 --entrypoint /usr/bin/test -v kubernetes-upgrade-182534:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 08:44:57.088491  178486 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-182534
	I0110 08:44:57.088569  178486 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 08:44:57.088583  178486 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 08:44:57.088660  178486 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-182534:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
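	The network.go lines above show how a free subnet is chosen: candidate 192.168.x.0/24 blocks are scanned in steps of 9 (49, 58, 67, 76, ...) and any block already owned by a local bridge is skipped. An illustrative reimplementation of that scan, not minikube's code:

```go
package main

import (
	"fmt"
	"net"
)

// taken reports whether any local interface already holds an address
// inside the candidate subnet (the log's "subnet ... that is taken" case).
func taken(subnet *net.IPNet) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true // be conservative on error
	}
	for _, a := range addrs {
		if ipnet, ok := a.(*net.IPNet); ok && subnet.Contains(ipnet.IP) {
			return true
		}
	}
	return false
}

func main() {
	// The same 9-wide stride the log shows: 49, 58, 67, 76, 85, ...
	for third := 49; third <= 247; third += 9 {
		_, subnet, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		if taken(subnet) {
			fmt.Printf("skipping subnet %s that is taken\n", subnet)
			continue
		}
		fmt.Printf("using free private subnet %s\n", subnet)
		return
	}
}
```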
	I0110 08:45:01.427365  175395 cli_runner.go:164] Run: docker container inspect missing-upgrade-854643 --format={{.State.Status}}
	W0110 08:45:01.446038  175395 cli_runner.go:211] docker container inspect missing-upgrade-854643 --format={{.State.Status}} returned with exit code 1
	I0110 08:45:01.446129  175395 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-854643": docker container inspect missing-upgrade-854643 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-854643
	I0110 08:45:01.446153  175395 oci.go:673] temporary error: container missing-upgrade-854643 status is  but expect it to be exited
	I0110 08:45:01.446193  175395 oci.go:88] couldn't shut down missing-upgrade-854643 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-854643": docker container inspect missing-upgrade-854643 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-854643
	 
	I0110 08:45:01.446250  175395 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-854643
	I0110 08:45:01.466231  175395 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-854643
	W0110 08:45:01.483150  175395 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-854643 returned with exit code 1
	I0110 08:45:01.483248  175395 cli_runner.go:164] Run: docker network inspect missing-upgrade-854643 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:45:01.501433  175395 cli_runner.go:164] Run: docker network rm missing-upgrade-854643
	I0110 08:45:01.138207  175623 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0110 08:45:01.138243  175623 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
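	The api_server.go lines here and above record the apiserver health probe: a GET against /healthz with a short client timeout, where a timeout is reported as "stopped". A self-contained sketch of the same probe (endpoint hard-coded to this cluster's address for illustration; certificate verification skipped to keep it short, whereas the real client trusts the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second, // a timeout surfaces as the "stopped" case above
		Transport: &http.Transport{
			// Sketch-only shortcut: skip verifying the apiserver's serving cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.76.2:8443/healthz")
	if err != nil {
		fmt.Println("stopped:", err) // e.g. Client.Timeout exceeded while awaiting headers
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("returned %d: %s\n", resp.StatusCode, body) // healthy apiserver answers "ok"
}
```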
	
	
	==> CRI-O <==
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.103880333Z" level=info msg="RDT not available in the host system"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.103896238Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.104790868Z" level=info msg="Conmon does support the --sync option"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.104810294Z" level=info msg="Conmon does support the --log-global-size-max option"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.104827073Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.105615644Z" level=info msg="Conmon does support the --sync option"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.105629718Z" level=info msg="Conmon does support the --log-global-size-max option"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.110481909Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.110504943Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.11119441Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n        container_create_timeout = 240\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n        container_create_timeout = 240\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.1116253Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.111694289Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.186910664Z" level=info msg="Got pod network &{Name:coredns-7d764666f9-5f9zf Namespace:kube-system ID:d8baea648f36c8e8ac41165012ab62cdd571c4fb738512195db10981f9bd5769 UID:79d1ba64-372e-4517-8266-1498a7d0ae38 NetNS:/var/run/netns/50ed1fa3-3f4d-478a-9cd5-3689f535d17f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000ce4068}] Aliases:map[]}"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.187167996Z" level=info msg="Checking pod kube-system_coredns-7d764666f9-5f9zf for CNI network kindnet (type=ptp)"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.187601986Z" level=info msg="Registered SIGHUP reload watcher"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.187631543Z" level=info msg="Starting seccomp notifier watcher"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.187674328Z" level=info msg="Create NRI interface"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.187800651Z" level=info msg="built-in NRI default validator is disabled"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.187811634Z" level=info msg="runtime interface created"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.187823586Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.187828821Z" level=info msg="runtime interface starting up..."
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.187834286Z" level=info msg="starting plugins..."
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.187845738Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.188178202Z" level=info msg="No systemd watchdog enabled"
	Jan 10 08:44:57 pause-678123 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
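	The configuration dump shows default_runtime = "crun" with runtime_root = "/run/crun", while runc's state directory is /run/runc; that is consistent with the `runc list` failure at the top of this log, since no runc-managed container ever created /run/runc. A sketch that pulls those keys out of a config like the one above, assuming the conventional /etc/crio/crio.conf path and the third-party github.com/BurntSushi/toml decoder (an assumption of this sketch, not part of CRI-O):

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

// crioConfig models just the tables this sketch cares about, matching the
// [crio.runtime.runtimes.*] layout in the dump above.
type crioConfig struct {
	Crio struct {
		Runtime struct {
			DefaultRuntime string `toml:"default_runtime"`
			Runtimes       map[string]struct {
				RuntimeRoot string `toml:"runtime_root"`
			} `toml:"runtimes"`
		} `toml:"runtime"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConfig
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		panic(err)
	}
	fmt.Println("default runtime:", cfg.Crio.Runtime.DefaultRuntime) // "crun" here
	for name, rt := range cfg.Crio.Runtime.Runtimes {
		// With crun as default, the runc root (/run/runc) is never populated.
		fmt.Printf("runtime %q state dir: %s\n", name, rt.RuntimeRoot)
	}
}
```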
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	f3ea8d5598a5e       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                     11 seconds ago      Running             coredns                   0                   d8baea648f36c       coredns-7d764666f9-5f9zf               kube-system
	5caf2e27402c6       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   22 seconds ago      Running             kindnet-cni               0                   7eaf71ea67189       kindnet-tpclh                          kube-system
	0ec28ff9a5a23       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                     25 seconds ago      Running             kube-proxy                0                   baa0a42e723c6       kube-proxy-tp5db                       kube-system
	1206eca28b9b9       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                     35 seconds ago      Running             kube-scheduler            0                   769017a846a13       kube-scheduler-pause-678123            kube-system
	855477d18a3a2       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                     35 seconds ago      Running             kube-controller-manager   0                   98ec61b9fc2b3       kube-controller-manager-pause-678123   kube-system
	2b24ef461e6b2       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                     35 seconds ago      Running             kube-apiserver            0                   365dfba9461a5       kube-apiserver-pause-678123            kube-system
	f36022cb1f6eb       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                     35 seconds ago      Running             etcd                      0                   2438073bfc645       etcd-pause-678123                      kube-system
	
	
	==> coredns [f3ea8d5598a5ea9d8e3a390c18f790444ae5cb11b7c0226f0b144b9d05c83a04] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:50957 - 23744 "HINFO IN 8587846740084741196.5300764566522905185. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017452803s
	
	
	==> describe nodes <==
	Name:               pause-678123
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-678123
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=pause-678123
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T08_44_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 08:44:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-678123
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 08:44:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 08:44:52 +0000   Sat, 10 Jan 2026 08:44:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 08:44:52 +0000   Sat, 10 Jan 2026 08:44:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 08:44:52 +0000   Sat, 10 Jan 2026 08:44:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 08:44:52 +0000   Sat, 10 Jan 2026 08:44:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-678123
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                10e1629b-90af-4050-8a44-19154b5a5b56
	  Boot ID:                    7c12f492-59b6-440b-986c-741ece916a23
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-5f9zf                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-pause-678123                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-tpclh                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-pause-678123             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-pause-678123    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-tp5db                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-pause-678123             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node pause-678123 event: Registered Node pause-678123 in Controller
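	For reference, the "Allocated resources" percentages follow directly from the pod table above: summing the per-pod CPU requests and dividing by the node's 8-CPU allocatable reproduces the 850m (10%) line, with the percentage truncated. A quick check:

```go
package main

import "fmt"

func main() {
	// Millicore CPU requests, read off the Non-terminated Pods table above.
	requests := map[string]int{
		"coredns":                 100,
		"etcd":                    100,
		"kindnet":                 100,
		"kube-apiserver":          250,
		"kube-controller-manager": 200,
		"kube-proxy":              0,
		"kube-scheduler":          100,
	}
	total := 0
	for _, m := range requests {
		total += m
	}
	allocatable := 8 * 1000 // 8 CPUs in millicores
	// Integer division truncates, matching kubectl's rendering.
	fmt.Printf("cpu requests: %dm (%d%%)\n", total, total*100/allocatable) // 850m (10%)
}
```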
	
	
	==> dmesg <==
	[Jan10 08:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001659] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.404004] i8042: Warning: Keylock active
	[  +0.021255] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.508728] block sda: the capability attribute has been deprecated.
	[  +0.091638] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026443] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.290756] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [f36022cb1f6ebf0fb589be28e6c4a599c94aa3ed350eedaa9ae426ea44cf5016] <==
	{"level":"info","ts":"2026-01-10T08:44:28.051719Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T08:44:28.598322Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-10T08:44:28.598382Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-10T08:44:28.598456Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2026-01-10T08:44:28.598481Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:44:28.598499Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T08:44:28.599277Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T08:44:28.599365Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:44:28.599398Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2026-01-10T08:44:28.599411Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T08:44:28.600139Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-678123 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T08:44:28.600187Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:44:28.600217Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:44:28.600350Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:44:28.600464Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T08:44:28.600491Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T08:44:28.601539Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:44:28.601525Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:44:28.601647Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:44:28.601549Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:44:28.601716Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:44:28.601762Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T08:44:28.602031Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T08:44:28.606592Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T08:44:28.606669Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
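	The raft lines above record a single-member election: ea7e25599daad906 pre-votes, votes for itself, and becomes leader at term 2. A client-side sketch that confirms the same facts through the etcd v3 API (endpoint and TLS handling simplified; a real probe against this cluster would present its client certificates):

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 2 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	st, err := cli.Status(context.TODO(), "127.0.0.1:2379")
	if err != nil {
		panic(err)
	}
	// With one voting member there is only one possible leader, so
	// Leader == Header.MemberId, and RaftTerm is 2 after the first election.
	fmt.Printf("member=%x leader=%x term=%d\n", st.Header.MemberId, st.Leader, st.RaftTerm)
}
```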
	
	
	==> kernel <==
	 08:45:03 up 27 min,  0 user,  load average: 5.09, 2.34, 1.52
	Linux pause-678123 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5caf2e27402c6bf30d53a931012ff5849fde4125867d91d0913fce93b68c0021] <==
	I0110 08:44:40.307854       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 08:44:40.308271       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 08:44:40.308411       1 main.go:148] setting mtu 1500 for CNI 
	I0110 08:44:40.308432       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 08:44:40.308451       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T08:44:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 08:44:40.599035       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 08:44:40.599069       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 08:44:40.599080       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 08:44:40.599195       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 08:44:41.199250       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 08:44:41.199281       1 metrics.go:72] Registering metrics
	I0110 08:44:41.199367       1 controller.go:711] "Syncing nftables rules"
	I0110 08:44:50.512823       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 08:44:50.512913       1 main.go:301] handling current node
	I0110 08:45:00.516229       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 08:45:00.516270       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2b24ef461e6b2db4c3746849bf15b7a5bb7ede8d1ad7b23c1d938ec8e945e86d] <==
	I0110 08:44:29.875772       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:29.875860       1 policy_source.go:248] refreshing policies
	E0110 08:44:29.879305       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I0110 08:44:29.926668       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 08:44:29.950177       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:44:29.950309       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 08:44:29.954453       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:44:30.031809       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 08:44:30.730895       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0110 08:44:30.734675       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0110 08:44:30.734695       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 08:44:31.195458       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 08:44:31.229411       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 08:44:31.337778       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 08:44:31.343399       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0110 08:44:31.344603       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 08:44:31.349305       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 08:44:31.754439       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 08:44:32.413851       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 08:44:32.425094       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 08:44:32.433925       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0110 08:44:37.208386       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:44:37.212396       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:44:37.406028       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 08:44:37.605322       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [855477d18a3a24c8ce5384ee94e9fbbf34b25e8c7221d8828b4fbb3bbb98e8b5] <==
	I0110 08:44:36.561920       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.561986       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562160       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562172       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562196       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562217       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562233       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562257       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 08:44:36.562307       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562392       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562458       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562502       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562583       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562650       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-678123"
	I0110 08:44:36.562008       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562727       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0110 08:44:36.563245       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.567707       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:44:36.570428       1 range_allocator.go:433] "Set node PodCIDR" node="pause-678123" podCIDRs=["10.244.0.0/24"]
	I0110 08:44:36.584778       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.661628       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.661647       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 08:44:36.661653       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 08:44:36.668130       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:51.564038       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [0ec28ff9a5a230ef38b5b4a4e2fb64fdcf59fa83b33592705b9c7c3586711f1c] <==
	I0110 08:44:38.101696       1 server_linux.go:53] "Using iptables proxy"
	I0110 08:44:38.185224       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:44:38.286207       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:38.286303       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 08:44:38.286456       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 08:44:38.337577       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 08:44:38.351860       1 server_linux.go:136] "Using iptables Proxier"
	I0110 08:44:38.388352       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 08:44:38.389677       1 server.go:529] "Version info" version="v1.35.0"
	I0110 08:44:38.389991       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:44:38.391630       1 config.go:200] "Starting service config controller"
	I0110 08:44:38.394283       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 08:44:38.393280       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 08:44:38.394442       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 08:44:38.393204       1 config.go:106] "Starting endpoint slice config controller"
	I0110 08:44:38.394494       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 08:44:38.392856       1 config.go:309] "Starting node config controller"
	I0110 08:44:38.394541       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 08:44:38.394564       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 08:44:38.495289       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 08:44:38.495362       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 08:44:38.495396       1 shared_informer.go:356] "Caches are synced" controller="service config"
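	The paired "Waiting for caches to sync" / "Caches are synced" messages above are the standard client-go informer startup handshake: start the informer factory, then block until each informer's initial List completes. A minimal sketch of that pattern (kubeconfig discovery simplified; the service informer stands in for kube-proxy's "service config" controller):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	services := factory.Core().V1().Services().Informer()

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)

	fmt.Println(`"Waiting for caches to sync" controller="service config"`)
	factory.WaitForCacheSync(stop) // blocks until the initial List has populated the cache
	fmt.Println(`"Caches are synced" controller="service config"`)
	fmt.Println("services cached:", len(services.GetStore().List()))
}
```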
	
	
	==> kube-scheduler [1206eca28b9b971a40e042d2cbfbee5210ee9fec259b792781c02e55620e92db] <==
	E0110 08:44:29.792575       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 08:44:29.792522       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 08:44:29.793022       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 08:44:29.793076       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 08:44:29.793120       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 08:44:29.793246       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 08:44:29.793336       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 08:44:29.793554       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 08:44:29.793675       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 08:44:29.794100       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 08:44:29.794276       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 08:44:29.794331       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 08:44:29.794346       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 08:44:30.616575       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 08:44:30.703335       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 08:44:30.761581       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 08:44:30.767013       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 08:44:30.836262       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E0110 08:44:30.856511       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 08:44:30.879002       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 08:44:30.881800       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 08:44:30.920377       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 08:44:30.964878       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 08:44:30.985102       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	I0110 08:44:32.784980       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 08:44:37 pause-678123 kubelet[1280]: I0110 08:44:37.683712    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlldx\" (UniqueName: \"kubernetes.io/projected/92ff064a-cbcf-4754-924c-2b7be0b8d914-kube-api-access-tlldx\") pod \"kube-proxy-tp5db\" (UID: \"92ff064a-cbcf-4754-924c-2b7be0b8d914\") " pod="kube-system/kube-proxy-tp5db"
	Jan 10 08:44:37 pause-678123 kubelet[1280]: I0110 08:44:37.683727    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92ff064a-cbcf-4754-924c-2b7be0b8d914-xtables-lock\") pod \"kube-proxy-tp5db\" (UID: \"92ff064a-cbcf-4754-924c-2b7be0b8d914\") " pod="kube-system/kube-proxy-tp5db"
	Jan 10 08:44:37 pause-678123 kubelet[1280]: I0110 08:44:37.683776    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-857xp\" (UniqueName: \"kubernetes.io/projected/ed69837a-7d90-4463-b544-c354590fc785-kube-api-access-857xp\") pod \"kindnet-tpclh\" (UID: \"ed69837a-7d90-4463-b544-c354590fc785\") " pod="kube-system/kindnet-tpclh"
	Jan 10 08:44:38 pause-678123 kubelet[1280]: I0110 08:44:38.302763    1280 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-tp5db" podStartSLOduration=1.302729465 podStartE2EDuration="1.302729465s" podCreationTimestamp="2026-01-10 08:44:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 08:44:38.301780877 +0000 UTC m=+6.145475095" watchObservedRunningTime="2026-01-10 08:44:38.302729465 +0000 UTC m=+6.146423685"
	Jan 10 08:44:40 pause-678123 kubelet[1280]: I0110 08:44:40.305782    1280 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-tpclh" podStartSLOduration=1.19266468 podStartE2EDuration="3.30576562s" podCreationTimestamp="2026-01-10 08:44:37 +0000 UTC" firstStartedPulling="2026-01-10 08:44:37.958493893 +0000 UTC m=+5.802188106" lastFinishedPulling="2026-01-10 08:44:40.071594836 +0000 UTC m=+7.915289046" observedRunningTime="2026-01-10 08:44:40.305729709 +0000 UTC m=+8.149423918" watchObservedRunningTime="2026-01-10 08:44:40.30576562 +0000 UTC m=+8.149459849"
	Jan 10 08:44:40 pause-678123 kubelet[1280]: E0110 08:44:40.539193    1280 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-678123" containerName="kube-apiserver"
	Jan 10 08:44:41 pause-678123 kubelet[1280]: E0110 08:44:41.216558    1280 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-678123" containerName="kube-scheduler"
	Jan 10 08:44:41 pause-678123 kubelet[1280]: E0110 08:44:41.419438    1280 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-678123" containerName="etcd"
	Jan 10 08:44:45 pause-678123 kubelet[1280]: E0110 08:44:45.387142    1280 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-678123" containerName="kube-controller-manager"
	Jan 10 08:44:50 pause-678123 kubelet[1280]: E0110 08:44:50.545847    1280 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-678123" containerName="kube-apiserver"
	Jan 10 08:44:50 pause-678123 kubelet[1280]: I0110 08:44:50.933832    1280 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 10 08:44:51 pause-678123 kubelet[1280]: I0110 08:44:51.084937    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79d1ba64-372e-4517-8266-1498a7d0ae38-config-volume\") pod \"coredns-7d764666f9-5f9zf\" (UID: \"79d1ba64-372e-4517-8266-1498a7d0ae38\") " pod="kube-system/coredns-7d764666f9-5f9zf"
	Jan 10 08:44:51 pause-678123 kubelet[1280]: I0110 08:44:51.084997    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8l9h\" (UniqueName: \"kubernetes.io/projected/79d1ba64-372e-4517-8266-1498a7d0ae38-kube-api-access-d8l9h\") pod \"coredns-7d764666f9-5f9zf\" (UID: \"79d1ba64-372e-4517-8266-1498a7d0ae38\") " pod="kube-system/coredns-7d764666f9-5f9zf"
	Jan 10 08:44:51 pause-678123 kubelet[1280]: E0110 08:44:51.221153    1280 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-678123" containerName="kube-scheduler"
	Jan 10 08:44:51 pause-678123 kubelet[1280]: E0110 08:44:51.420877    1280 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-678123" containerName="etcd"
	Jan 10 08:44:52 pause-678123 kubelet[1280]: E0110 08:44:52.322406    1280 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5f9zf" containerName="coredns"
	Jan 10 08:44:52 pause-678123 kubelet[1280]: I0110 08:44:52.332955    1280 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-5f9zf" podStartSLOduration=15.332934291 podStartE2EDuration="15.332934291s" podCreationTimestamp="2026-01-10 08:44:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 08:44:52.332835426 +0000 UTC m=+20.176529644" watchObservedRunningTime="2026-01-10 08:44:52.332934291 +0000 UTC m=+20.176628507"
	Jan 10 08:44:53 pause-678123 kubelet[1280]: E0110 08:44:53.324157    1280 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5f9zf" containerName="coredns"
	Jan 10 08:44:54 pause-678123 kubelet[1280]: E0110 08:44:54.328726    1280 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5f9zf" containerName="coredns"
	Jan 10 08:44:57 pause-678123 kubelet[1280]: E0110 08:44:57.273788    1280 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized"
	Jan 10 08:45:00 pause-678123 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 08:45:00 pause-678123 kubelet[1280]: I0110 08:45:00.877932    1280 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jan 10 08:45:00 pause-678123 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 08:45:00 pause-678123 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 08:45:00 pause-678123 systemd[1]: kubelet.service: Consumed 1.250s CPU time.
	

-- /stdout --
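The kube-scheduler "Failed to watch ... is forbidden" errors above are transient RBAC denials from the window in which the API server is still installing its bootstrap roles; the closing "Caches are synced" line at 08:44:32 shows they cleared on their own. As a minimal sketch for checking the scheduler's permissions by hand (assuming kubectl's current context points at this cluster; the user and resource names are taken from the log itself):

	# assumption: kubeconfig for the pause-678123 cluster is active
	kubectl auth can-i list pods --as=system:kube-scheduler
	kubectl get clusterrolebinding system:kube-scheduler -o yaml

The kubelet lines that end the block are the pause under test: minikube pause stops kubelet.service, which is why systemd reports it deactivated at 08:45:00.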
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-678123 -n pause-678123
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-678123 -n pause-678123: exit status 2 (339.645011ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
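minikube status deliberately returns a non-zero exit code whenever a component is not fully running, so exit status 2 alongside "Running" on stdout is expected for a cluster that has just been paused; the harness flags it "may be ok" for the same reason. Reproducing the check by hand with the harness's own flags:

	# same invocation as the (dbg) Run line above; print the exit status
	out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-678123 -n pause-678123; echo exit=$?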
helpers_test.go:270: (dbg) Run:  kubectl --context pause-678123 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-678123
helpers_test.go:244: (dbg) docker inspect pause-678123:

-- stdout --
	[
	    {
	        "Id": "9922b73863bb2644965cd2d7ef6181d70adcd4bed50556be8d16f27f9127811c",
	        "Created": "2026-01-10T08:44:11.26925534Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 165414,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T08:44:12.270281996Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/9922b73863bb2644965cd2d7ef6181d70adcd4bed50556be8d16f27f9127811c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9922b73863bb2644965cd2d7ef6181d70adcd4bed50556be8d16f27f9127811c/hostname",
	        "HostsPath": "/var/lib/docker/containers/9922b73863bb2644965cd2d7ef6181d70adcd4bed50556be8d16f27f9127811c/hosts",
	        "LogPath": "/var/lib/docker/containers/9922b73863bb2644965cd2d7ef6181d70adcd4bed50556be8d16f27f9127811c/9922b73863bb2644965cd2d7ef6181d70adcd4bed50556be8d16f27f9127811c-json.log",
	        "Name": "/pause-678123",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-678123:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-678123",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9922b73863bb2644965cd2d7ef6181d70adcd4bed50556be8d16f27f9127811c",
	                "LowerDir": "/var/lib/docker/overlay2/158928b75aaae99bf4f98241df18202fbafee5b7159141ac27ee3f8dc5ab8003-init/diff:/var/lib/docker/overlay2/97b14eb520192356c991915ce74cd6094e6ef948f398ac688b667ea18491ccde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/158928b75aaae99bf4f98241df18202fbafee5b7159141ac27ee3f8dc5ab8003/merged",
	                "UpperDir": "/var/lib/docker/overlay2/158928b75aaae99bf4f98241df18202fbafee5b7159141ac27ee3f8dc5ab8003/diff",
	                "WorkDir": "/var/lib/docker/overlay2/158928b75aaae99bf4f98241df18202fbafee5b7159141ac27ee3f8dc5ab8003/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-678123",
	                "Source": "/var/lib/docker/volumes/pause-678123/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-678123",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-678123",
	                "name.minikube.sigs.k8s.io": "pause-678123",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "04bb3391602ae19527e87136593851b84ff9c027def654bd3422b22e5d94d675",
	            "SandboxKey": "/var/run/docker/netns/04bb3391602a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32963"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32964"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32967"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32965"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32966"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-678123": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "da73e98c01a21ed7c72204225a3074f49f65f3ce3cdc15a9a317580f4a9c6957",
	                    "EndpointID": "50251e56b1ad1ffdca35895803bc227179984d64c9c4fedd2f9a05386c6d3846",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "62:ac:49:97:30:08",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-678123",
	                        "9922b73863bb"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
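Rather than reading the full JSON dump, individual fields can be pulled with the same Go-template flags that minikube itself uses later in this log (see the cli_runner lines). For example, the container state and the mapped SSH port from the inspect output above:

	# both templates appear verbatim in the cli_runner log lines below
	docker container inspect pause-678123 --format={{.State.Status}}
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-678123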
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-678123 -n pause-678123
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-678123 -n pause-678123: exit status 2 (334.190912ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-678123 logs -n 25
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p scheduled-stop-701534 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:42 UTC │ 10 Jan 26 08:42 UTC │
	│ stop    │ -p scheduled-stop-701534 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:42 UTC │                     │
	│ stop    │ -p scheduled-stop-701534 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:42 UTC │                     │
	│ stop    │ -p scheduled-stop-701534 --schedule 5m -v=5 --alsologtostderr                                                                            │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:42 UTC │                     │
	│ stop    │ -p scheduled-stop-701534 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:42 UTC │                     │
	│ stop    │ -p scheduled-stop-701534 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:42 UTC │                     │
	│ stop    │ -p scheduled-stop-701534 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:42 UTC │                     │
	│ stop    │ -p scheduled-stop-701534 --cancel-scheduled                                                                                              │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:42 UTC │ 10 Jan 26 08:42 UTC │
	│ stop    │ -p scheduled-stop-701534 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:42 UTC │                     │
	│ stop    │ -p scheduled-stop-701534 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:42 UTC │                     │
	│ stop    │ -p scheduled-stop-701534 --schedule 15s -v=5 --alsologtostderr                                                                           │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:42 UTC │ 10 Jan 26 08:43 UTC │
	│ delete  │ -p scheduled-stop-701534                                                                                                                 │ scheduled-stop-701534       │ jenkins │ v1.37.0 │ 10 Jan 26 08:43 UTC │ 10 Jan 26 08:43 UTC │
	│ start   │ -p insufficient-storage-221766 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-221766 │ jenkins │ v1.37.0 │ 10 Jan 26 08:43 UTC │                     │
	│ delete  │ -p insufficient-storage-221766                                                                                                           │ insufficient-storage-221766 │ jenkins │ v1.37.0 │ 10 Jan 26 08:43 UTC │ 10 Jan 26 08:43 UTC │
	│ start   │ -p offline-crio-669446 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-669446         │ jenkins │ v1.37.0 │ 10 Jan 26 08:43 UTC │ 10 Jan 26 08:44 UTC │
	│ start   │ -p pause-678123 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-678123                │ jenkins │ v1.37.0 │ 10 Jan 26 08:43 UTC │ 10 Jan 26 08:44 UTC │
	│ start   │ -p stopped-upgrade-761816 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ stopped-upgrade-761816      │ jenkins │ v1.35.0 │ 10 Jan 26 08:43 UTC │ 10 Jan 26 08:44 UTC │
	│ start   │ -p missing-upgrade-854643 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-854643      │ jenkins │ v1.35.0 │ 10 Jan 26 08:43 UTC │ 10 Jan 26 08:44 UTC │
	│ stop    │ stopped-upgrade-761816 stop                                                                                                              │ stopped-upgrade-761816      │ jenkins │ v1.35.0 │ 10 Jan 26 08:44 UTC │ 10 Jan 26 08:44 UTC │
	│ start   │ -p missing-upgrade-854643 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-854643      │ jenkins │ v1.37.0 │ 10 Jan 26 08:44 UTC │                     │
	│ start   │ -p stopped-upgrade-761816 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-761816      │ jenkins │ v1.37.0 │ 10 Jan 26 08:44 UTC │                     │
	│ delete  │ -p offline-crio-669446                                                                                                                   │ offline-crio-669446         │ jenkins │ v1.37.0 │ 10 Jan 26 08:44 UTC │ 10 Jan 26 08:44 UTC │
	│ start   │ -p pause-678123 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-678123                │ jenkins │ v1.37.0 │ 10 Jan 26 08:44 UTC │ 10 Jan 26 08:45 UTC │
	│ start   │ -p kubernetes-upgrade-182534 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-182534   │ jenkins │ v1.37.0 │ 10 Jan 26 08:44 UTC │                     │
	│ pause   │ -p pause-678123 --alsologtostderr -v=5                                                                                                   │ pause-678123                │ jenkins │ v1.37.0 │ 10 Jan 26 08:45 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:44:56
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:44:56.336456  178486 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:44:56.336698  178486 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:44:56.336707  178486 out.go:374] Setting ErrFile to fd 2...
	I0110 08:44:56.336711  178486 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:44:56.336918  178486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:44:56.337384  178486 out.go:368] Setting JSON to false
	I0110 08:44:56.338345  178486 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1648,"bootTime":1768033048,"procs":276,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:44:56.338395  178486 start.go:143] virtualization: kvm guest
	I0110 08:44:56.340456  178486 out.go:179] * [kubernetes-upgrade-182534] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 08:44:56.341921  178486 notify.go:221] Checking for updates...
	I0110 08:44:56.341946  178486 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:44:56.343407  178486 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:44:56.344834  178486 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:44:56.346054  178486 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:44:56.347295  178486 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 08:44:56.348795  178486 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:44:56.350586  178486 config.go:182] Loaded profile config "missing-upgrade-854643": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0110 08:44:56.350762  178486 config.go:182] Loaded profile config "pause-678123": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:44:56.350865  178486 config.go:182] Loaded profile config "stopped-upgrade-761816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0110 08:44:56.350972  178486 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:44:56.376670  178486 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:44:56.376997  178486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:44:56.445954  178486 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:67 SystemTime:2026-01-10 08:44:56.435880465 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:44:56.446057  178486 docker.go:319] overlay module found
	I0110 08:44:56.447805  178486 out.go:179] * Using the docker driver based on user configuration
	I0110 08:44:56.449046  178486 start.go:309] selected driver: docker
	I0110 08:44:56.449065  178486 start.go:928] validating driver "docker" against <nil>
	I0110 08:44:56.449091  178486 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:44:56.449720  178486 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:44:56.506792  178486 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:67 SystemTime:2026-01-10 08:44:56.496994437 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:44:56.506969  178486 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 08:44:56.507203  178486 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 08:44:56.508809  178486 out.go:179] * Using Docker driver with root privileges
	I0110 08:44:56.510270  178486 cni.go:84] Creating CNI manager for ""
	I0110 08:44:56.510342  178486 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:44:56.510358  178486 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 08:44:56.510518  178486 start.go:353] cluster config:
	{Name:kubernetes-upgrade-182534 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-182534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:44:56.511904  178486 out.go:179] * Starting "kubernetes-upgrade-182534" primary control-plane node in "kubernetes-upgrade-182534" cluster
	I0110 08:44:56.513169  178486 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 08:44:56.514578  178486 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:44:56.515750  178486 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 08:44:56.515785  178486 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0110 08:44:56.515792  178486 cache.go:65] Caching tarball of preloaded images
	I0110 08:44:56.515854  178486 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:44:56.515906  178486 preload.go:251] Found /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 08:44:56.515922  178486 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0110 08:44:56.516019  178486 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/kubernetes-upgrade-182534/config.json ...
	I0110 08:44:56.516042  178486 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/kubernetes-upgrade-182534/config.json: {Name:mk0fec41d6a9f462e0159b34bc1c15ae991f15c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:44:56.537585  178486 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 08:44:56.537601  178486 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 08:44:56.537616  178486 cache.go:243] Successfully downloaded all kic artifacts
	I0110 08:44:56.537647  178486 start.go:360] acquireMachinesLock for kubernetes-upgrade-182534: {Name:mk14b488606d391bb7eb08340862fd84e2a57eba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:44:56.537766  178486 start.go:364] duration metric: took 100.476µs to acquireMachinesLock for "kubernetes-upgrade-182534"
	I0110 08:44:56.537797  178486 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-182534 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-182534 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 08:44:56.537861  178486 start.go:125] createHost starting for "" (driver="docker")
	I0110 08:44:55.826854  175395 cli_runner.go:164] Run: docker container inspect missing-upgrade-854643 --format={{.State.Status}}
	W0110 08:44:55.845955  175395 cli_runner.go:211] docker container inspect missing-upgrade-854643 --format={{.State.Status}} returned with exit code 1
	I0110 08:44:55.846023  175395 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-854643": docker container inspect missing-upgrade-854643 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-854643
	I0110 08:44:55.846041  175395 oci.go:673] temporary error: container missing-upgrade-854643 status is  but expect it to be exited
	I0110 08:44:55.846074  175395 retry.go:84] will retry after 5.6s: couldn't verify container is exited. %v: unknown state "missing-upgrade-854643": docker container inspect missing-upgrade-854643 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-854643
	I0110 08:44:56.135810  175623 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0110 08:44:56.135866  175623 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0110 08:44:54.476543  177687 out.go:252] * Updating the running docker "pause-678123" container ...
	I0110 08:44:54.476578  177687 machine.go:94] provisionDockerMachine start ...
	I0110 08:44:54.476661  177687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-678123
	I0110 08:44:54.494226  177687 main.go:144] libmachine: Using SSH client type: native
	I0110 08:44:54.494465  177687 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32963 <nil> <nil>}
	I0110 08:44:54.494477  177687 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 08:44:54.619276  177687 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-678123
	
	I0110 08:44:54.619308  177687 ubuntu.go:182] provisioning hostname "pause-678123"
	I0110 08:44:54.619374  177687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-678123
	I0110 08:44:54.639106  177687 main.go:144] libmachine: Using SSH client type: native
	I0110 08:44:54.639324  177687 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32963 <nil> <nil>}
	I0110 08:44:54.639337  177687 main.go:144] libmachine: About to run SSH command:
	sudo hostname pause-678123 && echo "pause-678123" | sudo tee /etc/hostname
	I0110 08:44:54.774787  177687 main.go:144] libmachine: SSH cmd err, output: <nil>: pause-678123
	
	I0110 08:44:54.774858  177687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-678123
	I0110 08:44:54.793013  177687 main.go:144] libmachine: Using SSH client type: native
	I0110 08:44:54.793228  177687 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32963 <nil> <nil>}
	I0110 08:44:54.793244  177687 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-678123' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-678123/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-678123' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 08:44:54.920003  177687 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 08:44:54.920034  177687 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-3641/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-3641/.minikube}
	I0110 08:44:54.920141  177687 ubuntu.go:190] setting up certificates
	I0110 08:44:54.920162  177687 provision.go:84] configureAuth start
	I0110 08:44:54.920210  177687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-678123
	I0110 08:44:54.938232  177687 provision.go:143] copyHostCerts
	I0110 08:44:54.938292  177687 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem, removing ...
	I0110 08:44:54.938308  177687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem
	I0110 08:44:54.938379  177687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem (1675 bytes)
	I0110 08:44:54.938476  177687 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem, removing ...
	I0110 08:44:54.938485  177687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem
	I0110 08:44:54.938513  177687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem (1078 bytes)
	I0110 08:44:54.938587  177687 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem, removing ...
	I0110 08:44:54.938595  177687 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem
	I0110 08:44:54.938618  177687 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem (1123 bytes)
	I0110 08:44:54.938674  177687 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem org=jenkins.pause-678123 san=[127.0.0.1 192.168.76.2 localhost minikube pause-678123]
	I0110 08:44:55.164707  177687 provision.go:177] copyRemoteCerts
	I0110 08:44:55.164775  177687 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 08:44:55.164824  177687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-678123
	I0110 08:44:55.183899  177687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/pause-678123/id_rsa Username:docker}
	I0110 08:44:55.276617  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0110 08:44:55.294870  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 08:44:55.312378  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0110 08:44:55.331291  177687 provision.go:87] duration metric: took 411.10883ms to configureAuth
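
The server cert generated during configureAuth above is a CA-signed certificate whose SANs (127.0.0.1, 192.168.76.2, localhost, minikube, pause-678123) cover every address the Docker machine answers on. minikube does this internally in Go; a hedged, hand-run equivalent with plain openssl (bash process substitution; file names assumed to match the paths in the log) would look like:

    openssl genrsa -out server-key.pem 2048
    openssl req -new -key server-key.pem -subj "/O=jenkins.pause-678123" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:pause-678123') \
      -days 1095 -out server.pem
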
	I0110 08:44:55.331324  177687 ubuntu.go:206] setting minikube options for container-runtime
	I0110 08:44:55.331578  177687 config.go:182] Loaded profile config "pause-678123": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:44:55.331673  177687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-678123
	I0110 08:44:55.350300  177687 main.go:144] libmachine: Using SSH client type: native
	I0110 08:44:55.350509  177687 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 32963 <nil> <nil>}
	I0110 08:44:55.350531  177687 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 08:44:55.669432  177687 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 08:44:55.669476  177687 machine.go:97] duration metric: took 1.192867179s to provisionDockerMachine
	I0110 08:44:55.669491  177687 start.go:293] postStartSetup for "pause-678123" (driver="docker")
	I0110 08:44:55.669504  177687 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 08:44:55.669564  177687 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 08:44:55.669609  177687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-678123
	I0110 08:44:55.688376  177687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/pause-678123/id_rsa Username:docker}
	I0110 08:44:55.797471  177687 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 08:44:55.801121  177687 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 08:44:55.801144  177687 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 08:44:55.801155  177687 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-3641/.minikube/addons for local assets ...
	I0110 08:44:55.801218  177687 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-3641/.minikube/files for local assets ...
	I0110 08:44:55.801311  177687 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem -> 71832.pem in /etc/ssl/certs
	I0110 08:44:55.801428  177687 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 08:44:55.809197  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem --> /etc/ssl/certs/71832.pem (1708 bytes)
	I0110 08:44:55.826784  177687 start.go:296] duration metric: took 157.278957ms for postStartSetup
	I0110 08:44:55.826862  177687 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:44:55.826936  177687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-678123
	I0110 08:44:55.846426  177687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/pause-678123/id_rsa Username:docker}
	I0110 08:44:55.937306  177687 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 08:44:55.942141  177687 fix.go:56] duration metric: took 1.485814638s for fixHost
	I0110 08:44:55.942167  177687 start.go:83] releasing machines lock for "pause-678123", held for 1.485859602s
	I0110 08:44:55.942242  177687 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-678123
	I0110 08:44:55.960706  177687 ssh_runner.go:195] Run: cat /version.json
	I0110 08:44:55.960768  177687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-678123
	I0110 08:44:55.960823  177687 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 08:44:55.960913  177687 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-678123
	I0110 08:44:55.981000  177687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/pause-678123/id_rsa Username:docker}
	I0110 08:44:55.981239  177687 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32963 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/pause-678123/id_rsa Username:docker}
	I0110 08:44:56.071043  177687 ssh_runner.go:195] Run: systemctl --version
	I0110 08:44:56.133749  177687 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 08:44:56.171662  177687 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 08:44:56.176441  177687 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 08:44:56.176519  177687 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 08:44:56.184231  177687 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 08:44:56.184252  177687 start.go:496] detecting cgroup driver to use...
	I0110 08:44:56.184291  177687 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 08:44:56.184337  177687 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 08:44:56.199149  177687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 08:44:56.212304  177687 docker.go:218] disabling cri-docker service (if available) ...
	I0110 08:44:56.212357  177687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 08:44:56.227425  177687 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 08:44:56.241865  177687 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 08:44:56.370370  177687 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 08:44:56.495973  177687 docker.go:234] disabling docker service ...
	I0110 08:44:56.496049  177687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 08:44:56.513010  177687 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 08:44:56.525646  177687 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 08:44:56.642236  177687 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 08:44:56.760125  177687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 08:44:56.773423  177687 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 08:44:56.788361  177687 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 08:44:56.788525  177687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:44:56.798187  177687 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 08:44:56.798248  177687 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:44:56.807354  177687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:44:56.818101  177687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:44:56.827368  177687 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 08:44:56.835999  177687 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:44:56.847456  177687 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:44:56.856521  177687 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:44:56.865884  177687 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 08:44:56.873842  177687 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 08:44:56.882272  177687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:44:56.996326  177687 ssh_runner.go:195] Run: sudo systemctl restart crio
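
The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the restart. Per those commands, and consistent with the CRI-O configuration dump at the end of this log, a spot check (grep pattern illustrative) should show:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
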
	I0110 08:44:57.191630  177687 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 08:44:57.191694  177687 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 08:44:57.195796  177687 start.go:574] Will wait 60s for crictl version
	I0110 08:44:57.195855  177687 ssh_runner.go:195] Run: which crictl
	I0110 08:44:57.199952  177687 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 08:44:57.231214  177687 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 08:44:57.231324  177687 ssh_runner.go:195] Run: crio --version
	I0110 08:44:57.261387  177687 ssh_runner.go:195] Run: crio --version
	I0110 08:44:57.297887  177687 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 08:44:57.299070  177687 cli_runner.go:164] Run: docker network inspect pause-678123 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:44:57.317503  177687 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0110 08:44:57.322271  177687 kubeadm.go:884] updating cluster {Name:pause-678123 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-678123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 08:44:57.322451  177687 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:44:57.322507  177687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 08:44:57.360242  177687 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 08:44:57.360262  177687 crio.go:433] Images already preloaded, skipping extraction
	I0110 08:44:57.360304  177687 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 08:44:57.389169  177687 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 08:44:57.389201  177687 cache_images.go:86] Images are preloaded, skipping loading
	I0110 08:44:57.389212  177687 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0 crio true true} ...
	I0110 08:44:57.389348  177687 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-678123 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:pause-678123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
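
The empty `ExecStart=` line in the kubelet drop-in above is the standard systemd idiom: it clears the packaged unit's command so the following `ExecStart=` fully replaces it rather than appending a second command. The merged result can be inspected on the node with:

    # Show the base unit plus the 10-kubeadm.conf drop-in as systemd sees them:
    systemctl cat kubelet
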
	I0110 08:44:57.389439  177687 ssh_runner.go:195] Run: crio config
	I0110 08:44:57.441340  177687 cni.go:84] Creating CNI manager for ""
	I0110 08:44:57.441358  177687 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:44:57.441371  177687 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 08:44:57.441391  177687 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-678123 NodeName:pause-678123 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 08:44:57.441492  177687 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-678123"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 08:44:57.441545  177687 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 08:44:57.449767  177687 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 08:44:57.449829  177687 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 08:44:57.458090  177687 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I0110 08:44:57.470611  177687 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 08:44:57.483532  177687 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
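
The rendered kubeadm config shown above lands on the node as /var/tmp/minikube/kubeadm.yaml.new. As a hypothetical sanity check (kubeadm 1.26 and later ship a validate subcommand), it could be checked by hand with:

    sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
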
	I0110 08:44:57.496458  177687 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0110 08:44:57.500670  177687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:44:57.617473  177687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:44:57.631885  177687 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123 for IP: 192.168.76.2
	I0110 08:44:57.631908  177687 certs.go:195] generating shared ca certs ...
	I0110 08:44:57.631922  177687 certs.go:227] acquiring lock for ca certs: {Name:mk00e261408d0e9fd9be39128613c5110a764de2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:44:57.632049  177687 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.key
	I0110 08:44:57.632087  177687 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.key
	I0110 08:44:57.632096  177687 certs.go:257] generating profile certs ...
	I0110 08:44:57.632172  177687 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123/client.key
	I0110 08:44:57.632228  177687 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123/apiserver.key.a35dd3da
	I0110 08:44:57.632262  177687 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123/proxy-client.key
	I0110 08:44:57.632355  177687 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183.pem (1338 bytes)
	W0110 08:44:57.632385  177687 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183_empty.pem, impossibly tiny 0 bytes
	I0110 08:44:57.632394  177687 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem (1675 bytes)
	I0110 08:44:57.632418  177687 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem (1078 bytes)
	I0110 08:44:57.632442  177687 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem (1123 bytes)
	I0110 08:44:57.632465  177687 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem (1675 bytes)
	I0110 08:44:57.632510  177687 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem (1708 bytes)
	I0110 08:44:57.633039  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 08:44:57.652375  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 08:44:57.670367  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 08:44:57.687338  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 08:44:57.705152  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0110 08:44:57.724073  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 08:44:57.742393  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 08:44:57.761146  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 08:44:57.778619  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183.pem --> /usr/share/ca-certificates/7183.pem (1338 bytes)
	I0110 08:44:57.796126  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem --> /usr/share/ca-certificates/71832.pem (1708 bytes)
	I0110 08:44:57.813792  177687 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 08:44:57.830938  177687 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 08:44:57.843887  177687 ssh_runner.go:195] Run: openssl version
	I0110 08:44:57.850264  177687 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:44:57.858451  177687 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 08:44:57.865955  177687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:44:57.869804  177687 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:44:57.869860  177687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:44:57.908197  177687 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 08:44:57.916327  177687 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7183.pem
	I0110 08:44:57.924010  177687 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7183.pem /etc/ssl/certs/7183.pem
	I0110 08:44:57.932002  177687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7183.pem
	I0110 08:44:57.935627  177687 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 08:23 /usr/share/ca-certificates/7183.pem
	I0110 08:44:57.935677  177687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7183.pem
	I0110 08:44:57.970001  177687 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 08:44:57.978291  177687 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/71832.pem
	I0110 08:44:57.986114  177687 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/71832.pem /etc/ssl/certs/71832.pem
	I0110 08:44:57.993915  177687 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71832.pem
	I0110 08:44:57.997885  177687 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 08:23 /usr/share/ca-certificates/71832.pem
	I0110 08:44:57.997959  177687 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71832.pem
	I0110 08:44:58.032265  177687 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
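
Each `test -L /etc/ssl/certs/<hash>.0` above verifies a symlink named after the certificate's OpenSSL subject hash, the convention OpenSSL uses to look up trust anchors in a hashed cert directory. The hash comes from the `openssl x509 -hash` calls on the preceding lines, e.g.:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941   -> hence the link /etc/ssl/certs/b5213941.0
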
	I0110 08:44:58.039944  177687 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 08:44:58.043855  177687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 08:44:58.078414  177687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 08:44:58.113313  177687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 08:44:58.147640  177687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 08:44:58.181616  177687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 08:44:58.215668  177687 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
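
The `-checkend 86400` probes above exit non-zero if a certificate expires within 86400 seconds (24 hours); all six passing here is how minikube decides the existing control-plane certs can be reused instead of regenerated. By hand:

    openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "valid for at least 24h" || echo "renewal needed"
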
	I0110 08:44:58.250692  177687 kubeadm.go:401] StartCluster: {Name:pause-678123 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:pause-678123 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:44:58.250851  177687 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:44:58.250944  177687 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:44:58.282709  177687 cri.go:96] found id: "f3ea8d5598a5ea9d8e3a390c18f790444ae5cb11b7c0226f0b144b9d05c83a04"
	I0110 08:44:58.282743  177687 cri.go:96] found id: "5caf2e27402c6bf30d53a931012ff5849fde4125867d91d0913fce93b68c0021"
	I0110 08:44:58.282750  177687 cri.go:96] found id: "0ec28ff9a5a230ef38b5b4a4e2fb64fdcf59fa83b33592705b9c7c3586711f1c"
	I0110 08:44:58.282756  177687 cri.go:96] found id: "1206eca28b9b971a40e042d2cbfbee5210ee9fec259b792781c02e55620e92db"
	I0110 08:44:58.282761  177687 cri.go:96] found id: "855477d18a3a24c8ce5384ee94e9fbbf34b25e8c7221d8828b4fbb3bbb98e8b5"
	I0110 08:44:58.282766  177687 cri.go:96] found id: "2b24ef461e6b2db4c3746849bf15b7a5bb7ede8d1ad7b23c1d938ec8e945e86d"
	I0110 08:44:58.282770  177687 cri.go:96] found id: "f36022cb1f6ebf0fb589be28e6c4a599c94aa3ed350eedaa9ae426ea44cf5016"
	I0110 08:44:58.282774  177687 cri.go:96] found id: ""
	I0110 08:44:58.282816  177687 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 08:44:58.296832  177687 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:44:58Z" level=error msg="open /run/runc: no such file or directory"
	I0110 08:44:58.296911  177687 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 08:44:58.305178  177687 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 08:44:58.305199  177687 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 08:44:58.305250  177687 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 08:44:58.312660  177687 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 08:44:58.313627  177687 kubeconfig.go:125] found "pause-678123" server: "https://192.168.76.2:8443"
	I0110 08:44:58.314996  177687 kapi.go:59] client config for pause-678123: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123/client.crt", KeyFile:"/home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123/client.key", CAFile:"/home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f75c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0110 08:44:58.315543  177687 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0110 08:44:58.315565  177687 envvar.go:172] "Feature gate default state" feature="InOrderInformersBatchProcess" enabled=true
	I0110 08:44:58.315573  177687 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=true
	I0110 08:44:58.315579  177687 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=true
	I0110 08:44:58.315590  177687 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0110 08:44:58.315602  177687 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0110 08:44:58.316078  177687 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 08:44:58.323825  177687 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I0110 08:44:58.323860  177687 kubeadm.go:602] duration metric: took 18.654348ms to restartPrimaryControlPlane
	I0110 08:44:58.323870  177687 kubeadm.go:403] duration metric: took 73.191474ms to StartCluster
	I0110 08:44:58.323887  177687 settings.go:142] acquiring lock: {Name:mkbb32fc6441ceb31ce2923ea8999f8375298f08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:44:58.323958  177687 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:44:58.324944  177687 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/kubeconfig: {Name:mk0d29b3b0ee1fd71729aff31f901be50a2d0664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:44:58.325155  177687 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 08:44:58.325278  177687 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 08:44:58.325374  177687 config.go:182] Loaded profile config "pause-678123": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:44:58.331942  177687 out.go:179] * Verifying Kubernetes components...
	I0110 08:44:58.331949  177687 out.go:179] * Enabled addons: 
	I0110 08:44:58.333451  177687 addons.go:530] duration metric: took 8.183817ms for enable addons: enabled=[]
	I0110 08:44:58.333509  177687 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:44:58.449491  177687 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:44:58.462451  177687 node_ready.go:35] waiting up to 6m0s for node "pause-678123" to be "Ready" ...
	I0110 08:44:58.470107  177687 node_ready.go:49] node "pause-678123" is "Ready"
	I0110 08:44:58.470129  177687 node_ready.go:38] duration metric: took 7.643111ms for node "pause-678123" to be "Ready" ...
	I0110 08:44:58.470141  177687 api_server.go:52] waiting for apiserver process to appear ...
	I0110 08:44:58.470189  177687 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 08:44:58.481319  177687 api_server.go:72] duration metric: took 156.137719ms to wait for apiserver process to appear ...
	I0110 08:44:58.481340  177687 api_server.go:88] waiting for apiserver healthz status ...
	I0110 08:44:58.481363  177687 api_server.go:299] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0110 08:44:58.485985  177687 api_server.go:325] https://192.168.76.2:8443/healthz returned 200:
	ok
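
The healthz probe above can typically be reproduced anonymously, since kubeadm clusters bind the system:public-info-viewer role (which covers /healthz, /livez and /readyz) to unauthenticated users; -k skips TLS verification for this sketch:

    curl -sk https://192.168.76.2:8443/healthz
    # ok
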
	I0110 08:44:58.487163  177687 api_server.go:141] control plane version: v1.35.0
	I0110 08:44:58.487184  177687 api_server.go:131] duration metric: took 5.837916ms to wait for apiserver health ...
	I0110 08:44:58.487192  177687 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 08:44:58.490151  177687 system_pods.go:59] 7 kube-system pods found
	I0110 08:44:58.490173  177687 system_pods.go:61] "coredns-7d764666f9-5f9zf" [79d1ba64-372e-4517-8266-1498a7d0ae38] Running
	I0110 08:44:58.490178  177687 system_pods.go:61] "etcd-pause-678123" [58afbb3d-a513-41f1-a417-397c84c5e698] Running
	I0110 08:44:58.490182  177687 system_pods.go:61] "kindnet-tpclh" [ed69837a-7d90-4463-b544-c354590fc785] Running
	I0110 08:44:58.490186  177687 system_pods.go:61] "kube-apiserver-pause-678123" [d2ef5672-6a05-41ff-9ca0-b5248d7eb1b2] Running
	I0110 08:44:58.490189  177687 system_pods.go:61] "kube-controller-manager-pause-678123" [90a8e78c-ab51-47a5-9701-096c210da6ac] Running
	I0110 08:44:58.490194  177687 system_pods.go:61] "kube-proxy-tp5db" [92ff064a-cbcf-4754-924c-2b7be0b8d914] Running
	I0110 08:44:58.490198  177687 system_pods.go:61] "kube-scheduler-pause-678123" [2e424799-163f-4021-b178-857562dfce89] Running
	I0110 08:44:58.490202  177687 system_pods.go:74] duration metric: took 3.005359ms to wait for pod list to return data ...
	I0110 08:44:58.490212  177687 default_sa.go:34] waiting for default service account to be created ...
	I0110 08:44:58.492037  177687 default_sa.go:45] found service account: "default"
	I0110 08:44:58.492057  177687 default_sa.go:55] duration metric: took 1.839726ms for default service account to be created ...
	I0110 08:44:58.492065  177687 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 08:44:58.494882  177687 system_pods.go:86] 7 kube-system pods found
	I0110 08:44:58.494905  177687 system_pods.go:89] "coredns-7d764666f9-5f9zf" [79d1ba64-372e-4517-8266-1498a7d0ae38] Running
	I0110 08:44:58.494910  177687 system_pods.go:89] "etcd-pause-678123" [58afbb3d-a513-41f1-a417-397c84c5e698] Running
	I0110 08:44:58.494913  177687 system_pods.go:89] "kindnet-tpclh" [ed69837a-7d90-4463-b544-c354590fc785] Running
	I0110 08:44:58.494916  177687 system_pods.go:89] "kube-apiserver-pause-678123" [d2ef5672-6a05-41ff-9ca0-b5248d7eb1b2] Running
	I0110 08:44:58.494921  177687 system_pods.go:89] "kube-controller-manager-pause-678123" [90a8e78c-ab51-47a5-9701-096c210da6ac] Running
	I0110 08:44:58.494925  177687 system_pods.go:89] "kube-proxy-tp5db" [92ff064a-cbcf-4754-924c-2b7be0b8d914] Running
	I0110 08:44:58.494928  177687 system_pods.go:89] "kube-scheduler-pause-678123" [2e424799-163f-4021-b178-857562dfce89] Running
	I0110 08:44:58.494933  177687 system_pods.go:126] duration metric: took 2.864173ms to wait for k8s-apps to be running ...
	I0110 08:44:58.494942  177687 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 08:44:58.494981  177687 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:44:58.508578  177687 system_svc.go:56] duration metric: took 13.624562ms WaitForService to wait for kubelet
	I0110 08:44:58.508609  177687 kubeadm.go:587] duration metric: took 183.430061ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 08:44:58.508624  177687 node_conditions.go:102] verifying NodePressure condition ...
	I0110 08:44:58.511443  177687 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 08:44:58.511471  177687 node_conditions.go:123] node cpu capacity is 8
	I0110 08:44:58.511487  177687 node_conditions.go:105] duration metric: took 2.858938ms to run NodePressure ...
	I0110 08:44:58.511502  177687 start.go:242] waiting for startup goroutines ...
	I0110 08:44:58.511514  177687 start.go:247] waiting for cluster config update ...
	I0110 08:44:58.511525  177687 start.go:256] writing updated cluster config ...
	I0110 08:44:58.511885  177687 ssh_runner.go:195] Run: rm -f paused
	I0110 08:44:58.515793  177687 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 08:44:58.516371  177687 kapi.go:59] client config for pause-678123: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123/client.crt", KeyFile:"/home/jenkins/minikube-integration/22427-3641/.minikube/profiles/pause-678123/client.key", CAFile:"/home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f75c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0110 08:44:58.519129  177687 pod_ready.go:83] waiting for pod "coredns-7d764666f9-5f9zf" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:44:58.523005  177687 pod_ready.go:94] pod "coredns-7d764666f9-5f9zf" is "Ready"
	I0110 08:44:58.523028  177687 pod_ready.go:86] duration metric: took 3.864853ms for pod "coredns-7d764666f9-5f9zf" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:44:58.524788  177687 pod_ready.go:83] waiting for pod "etcd-pause-678123" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:44:58.528515  177687 pod_ready.go:94] pod "etcd-pause-678123" is "Ready"
	I0110 08:44:58.528535  177687 pod_ready.go:86] duration metric: took 3.731327ms for pod "etcd-pause-678123" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:44:58.530147  177687 pod_ready.go:83] waiting for pod "kube-apiserver-pause-678123" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:44:58.533788  177687 pod_ready.go:94] pod "kube-apiserver-pause-678123" is "Ready"
	I0110 08:44:58.533813  177687 pod_ready.go:86] duration metric: took 3.645501ms for pod "kube-apiserver-pause-678123" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:44:58.535645  177687 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-678123" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:44:58.919866  177687 pod_ready.go:94] pod "kube-controller-manager-pause-678123" is "Ready"
	I0110 08:44:58.919903  177687 pod_ready.go:86] duration metric: took 384.240295ms for pod "kube-controller-manager-pause-678123" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:44:59.119780  177687 pod_ready.go:83] waiting for pod "kube-proxy-tp5db" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:44:59.520117  177687 pod_ready.go:94] pod "kube-proxy-tp5db" is "Ready"
	I0110 08:44:59.520141  177687 pod_ready.go:86] duration metric: took 400.334702ms for pod "kube-proxy-tp5db" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:44:59.720311  177687 pod_ready.go:83] waiting for pod "kube-scheduler-pause-678123" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:45:00.120564  177687 pod_ready.go:94] pod "kube-scheduler-pause-678123" is "Ready"
	I0110 08:45:00.120590  177687 pod_ready.go:86] duration metric: took 400.251897ms for pod "kube-scheduler-pause-678123" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:45:00.120602  177687 pod_ready.go:40] duration metric: took 1.604783538s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 08:45:00.163434  177687 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 08:45:00.324859  177687 out.go:179] * Done! kubectl is now configured to use "pause-678123" cluster and "default" namespace by default
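
The per-pod "Ready" waits above (pod_ready.go) are roughly what `kubectl wait` does. An equivalent spot check for one of the label selectors from the log, using the profile's kubeconfig context (illustrative):

    kubectl --context pause-678123 -n kube-system wait pod \
      -l k8s-app=kube-proxy --for=condition=Ready --timeout=4m
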
	I0110 08:44:56.539805  178486 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 08:44:56.540016  178486 start.go:159] libmachine.API.Create for "kubernetes-upgrade-182534" (driver="docker")
	I0110 08:44:56.540043  178486 client.go:173] LocalClient.Create starting
	I0110 08:44:56.540100  178486 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem
	I0110 08:44:56.540129  178486 main.go:144] libmachine: Decoding PEM data...
	I0110 08:44:56.540147  178486 main.go:144] libmachine: Parsing certificate...
	I0110 08:44:56.540207  178486 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem
	I0110 08:44:56.540226  178486 main.go:144] libmachine: Decoding PEM data...
	I0110 08:44:56.540238  178486 main.go:144] libmachine: Parsing certificate...
	I0110 08:44:56.540526  178486 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-182534 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 08:44:56.560611  178486 cli_runner.go:211] docker network inspect kubernetes-upgrade-182534 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 08:44:56.560710  178486 network_create.go:284] running [docker network inspect kubernetes-upgrade-182534] to gather additional debugging logs...
	I0110 08:44:56.560730  178486 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-182534
	W0110 08:44:56.580481  178486 cli_runner.go:211] docker network inspect kubernetes-upgrade-182534 returned with exit code 1
	I0110 08:44:56.580511  178486 network_create.go:287] error running [docker network inspect kubernetes-upgrade-182534]: docker network inspect kubernetes-upgrade-182534: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-182534 not found
	I0110 08:44:56.580529  178486 network_create.go:289] output of [docker network inspect kubernetes-upgrade-182534]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-182534 not found
	
	** /stderr **
	I0110 08:44:56.580647  178486 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:44:56.597864  178486 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9da35691088c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:0a:0c:fc:dc:fc:2f} reservation:<nil>}
	I0110 08:44:56.598441  178486 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5ce9d5913249 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:11:5d:21:c0:0b} reservation:<nil>}
	I0110 08:44:56.598974  178486 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-73a46a53fce2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:e8:cf:3a:03:99} reservation:<nil>}
	I0110 08:44:56.599497  178486 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-da73e98c01a2 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ee:ca:e6:c3:18:ee} reservation:<nil>}
	I0110 08:44:56.600248  178486 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ea5fb0}
	I0110 08:44:56.600286  178486 network_create.go:124] attempt to create docker network kubernetes-upgrade-182534 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 08:44:56.600340  178486 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-182534 kubernetes-upgrade-182534
	I0110 08:44:56.647640  178486 network_create.go:108] docker network kubernetes-upgrade-182534 192.168.85.0/24 created
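
After scanning and skipping the four /24 subnets already claimed by other profile networks, minikube creates the bridge network on the first free one, 192.168.85.0/24. The result can be confirmed with the same Go-template style used throughout this log (format string illustrative):

    docker network inspect kubernetes-upgrade-182534 \
      --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
    # 192.168.85.0/24 via 192.168.85.1
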
	I0110 08:44:56.647669  178486 kic.go:121] calculated static IP "192.168.85.2" for the "kubernetes-upgrade-182534" container
	I0110 08:44:56.647726  178486 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 08:44:56.665000  178486 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-182534 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-182534 --label created_by.minikube.sigs.k8s.io=true
	I0110 08:44:56.689297  178486 oci.go:103] Successfully created a docker volume kubernetes-upgrade-182534
	I0110 08:44:56.689361  178486 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-182534-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-182534 --entrypoint /usr/bin/test -v kubernetes-upgrade-182534:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 08:44:57.088491  178486 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-182534
	I0110 08:44:57.088569  178486 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0110 08:44:57.088583  178486 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 08:44:57.088660  178486 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-182534:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
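
The `docker run --entrypoint /usr/bin/tar` above is the kic preload trick: a throwaway container mounts the named volume and untars the preloaded image store into it, so the node container that later mounts the same volume at /var starts with its images already in place. A hedged way to peek at the extracted store afterwards (busybox is an arbitrary choice here, not part of the test):

    docker run --rm -v kubernetes-upgrade-182534:/var busybox \
      ls /var/lib/containers/storage
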
	I0110 08:45:01.427365  175395 cli_runner.go:164] Run: docker container inspect missing-upgrade-854643 --format={{.State.Status}}
	W0110 08:45:01.446038  175395 cli_runner.go:211] docker container inspect missing-upgrade-854643 --format={{.State.Status}} returned with exit code 1
	I0110 08:45:01.446129  175395 oci.go:671] temporary error verifying shutdown: unknown state "missing-upgrade-854643": docker container inspect missing-upgrade-854643 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-854643
	I0110 08:45:01.446153  175395 oci.go:673] temporary error: container missing-upgrade-854643 status is  but expect it to be exited
	I0110 08:45:01.446193  175395 oci.go:88] couldn't shut down missing-upgrade-854643 (might be okay): verify shutdown: couldn't verify container is exited. %v: unknown state "missing-upgrade-854643": docker container inspect missing-upgrade-854643 --format={{.State.Status}}: exit status 1
	stdout:
	
	
	stderr:
	Error response from daemon: No such container: missing-upgrade-854643
	 
	I0110 08:45:01.446250  175395 cli_runner.go:164] Run: docker rm -f -v missing-upgrade-854643
	I0110 08:45:01.466231  175395 cli_runner.go:164] Run: docker container inspect -f {{.Id}} missing-upgrade-854643
	W0110 08:45:01.483150  175395 cli_runner.go:211] docker container inspect -f {{.Id}} missing-upgrade-854643 returned with exit code 1
	I0110 08:45:01.483248  175395 cli_runner.go:164] Run: docker network inspect missing-upgrade-854643 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:45:01.501433  175395 cli_runner.go:164] Run: docker network rm missing-upgrade-854643
	I0110 08:45:01.138207  175623 api_server.go:315] stopped: https://192.168.103.2:8443/healthz: Get "https://192.168.103.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0110 08:45:01.138243  175623 api_server.go:299] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0110 08:45:01.795493  175395 fix.go:124] Sleeping 1 second for extra luck!
	I0110 08:45:02.796418  175395 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.103880333Z" level=info msg="RDT not available in the host system"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.103896238Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.104790868Z" level=info msg="Conmon does support the --sync option"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.104810294Z" level=info msg="Conmon does support the --log-global-size-max option"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.104827073Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.105615644Z" level=info msg="Conmon does support the --sync option"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.105629718Z" level=info msg="Conmon does support the --log-global-size-max option"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.110481909Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.110504943Z" level=info msg="Updated default CNI network name to kindnet"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.11119441Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hook
s.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_m
appings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n        container_create_timeout = 240\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n        container_create_timeout = 240\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enf
orcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio
.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.1116253Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.111694289Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.186910664Z" level=info msg="Got pod network &{Name:coredns-7d764666f9-5f9zf Namespace:kube-system ID:d8baea648f36c8e8ac41165012ab62cdd571c4fb738512195db10981f9bd5769 UID:79d1ba64-372e-4517-8266-1498a7d0ae38 NetNS:/var/run/netns/50ed1fa3-3f4d-478a-9cd5-3689f535d17f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000ce4068}] Aliases:map[]}"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.187167996Z" level=info msg="Checking pod kube-system_coredns-7d764666f9-5f9zf for CNI network kindnet (type=ptp)"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.187601986Z" level=info msg="Registered SIGHUP reload watcher"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.187631543Z" level=info msg="Starting seccomp notifier watcher"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.187674328Z" level=info msg="Create NRI interface"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.187800651Z" level=info msg="built-in NRI default validator is disabled"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.187811634Z" level=info msg="runtime interface created"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.187823586Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.187828821Z" level=info msg="runtime interface starting up..."
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.187834286Z" level=info msg="starting plugins..."
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.187845738Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Jan 10 08:44:57 pause-678123 crio[2185]: time="2026-01-10T08:44:57.188178202Z" level=info msg="No systemd watchdog enabled"
	Jan 10 08:44:57 pause-678123 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	f3ea8d5598a5e       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                     13 seconds ago      Running             coredns                   0                   d8baea648f36c       coredns-7d764666f9-5f9zf               kube-system
	5caf2e27402c6       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   24 seconds ago      Running             kindnet-cni               0                   7eaf71ea67189       kindnet-tpclh                          kube-system
	0ec28ff9a5a23       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                     26 seconds ago      Running             kube-proxy                0                   baa0a42e723c6       kube-proxy-tp5db                       kube-system
	1206eca28b9b9       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                     36 seconds ago      Running             kube-scheduler            0                   769017a846a13       kube-scheduler-pause-678123            kube-system
	855477d18a3a2       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                     36 seconds ago      Running             kube-controller-manager   0                   98ec61b9fc2b3       kube-controller-manager-pause-678123   kube-system
	2b24ef461e6b2       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                     36 seconds ago      Running             kube-apiserver            0                   365dfba9461a5       kube-apiserver-pause-678123            kube-system
	f36022cb1f6eb       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                     37 seconds ago      Running             etcd                      0                   2438073bfc645       etcd-pause-678123                      kube-system
	
	
	==> coredns [f3ea8d5598a5ea9d8e3a390c18f790444ae5cb11b7c0226f0b144b9d05c83a04] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:50957 - 23744 "HINFO IN 8587846740084741196.5300764566522905185. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017452803s
	
	
	==> describe nodes <==
	Name:               pause-678123
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-678123
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=pause-678123
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T08_44_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 08:44:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-678123
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 08:44:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 08:44:52 +0000   Sat, 10 Jan 2026 08:44:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 08:44:52 +0000   Sat, 10 Jan 2026 08:44:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 08:44:52 +0000   Sat, 10 Jan 2026 08:44:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 08:44:52 +0000   Sat, 10 Jan 2026 08:44:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-678123
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                10e1629b-90af-4050-8a44-19154b5a5b56
	  Boot ID:                    7c12f492-59b6-440b-986c-741ece916a23
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7d764666f9-5f9zf                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-pause-678123                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-tpclh                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-pause-678123             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-pause-678123    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-tp5db                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-pause-678123             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  28s   node-controller  Node pause-678123 event: Registered Node pause-678123 in Controller
	
	
	==> dmesg <==
	[Jan10 08:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001659] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.088011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.404004] i8042: Warning: Keylock active
	[  +0.021255] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.508728] block sda: the capability attribute has been deprecated.
	[  +0.091638] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026443] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +5.290756] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [f36022cb1f6ebf0fb589be28e6c4a599c94aa3ed350eedaa9ae426ea44cf5016] <==
	{"level":"info","ts":"2026-01-10T08:44:28.051719Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2026-01-10T08:44:28.598322Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-10T08:44:28.598382Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-10T08:44:28.598456Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2026-01-10T08:44:28.598481Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:44:28.598499Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T08:44:28.599277Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T08:44:28.599365Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:44:28.599398Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2026-01-10T08:44:28.599411Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T08:44:28.600139Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:pause-678123 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T08:44:28.600187Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:44:28.600217Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:44:28.600350Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:44:28.600464Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T08:44:28.600491Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T08:44:28.601539Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:44:28.601525Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:44:28.601647Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:44:28.601549Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:44:28.601716Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:44:28.601762Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T08:44:28.602031Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T08:44:28.606592Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T08:44:28.606669Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	
	
	==> kernel <==
	 08:45:05 up 27 min,  0 user,  load average: 4.76, 2.32, 1.51
	Linux pause-678123 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5caf2e27402c6bf30d53a931012ff5849fde4125867d91d0913fce93b68c0021] <==
	I0110 08:44:40.307854       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 08:44:40.308271       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 08:44:40.308411       1 main.go:148] setting mtu 1500 for CNI 
	I0110 08:44:40.308432       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 08:44:40.308451       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T08:44:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 08:44:40.599035       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 08:44:40.599069       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 08:44:40.599080       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 08:44:40.599195       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 08:44:41.199250       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 08:44:41.199281       1 metrics.go:72] Registering metrics
	I0110 08:44:41.199367       1 controller.go:711] "Syncing nftables rules"
	I0110 08:44:50.512823       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 08:44:50.512913       1 main.go:301] handling current node
	I0110 08:45:00.516229       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 08:45:00.516270       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2b24ef461e6b2db4c3746849bf15b7a5bb7ede8d1ad7b23c1d938ec8e945e86d] <==
	I0110 08:44:29.875772       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:29.875860       1 policy_source.go:248] refreshing policies
	E0110 08:44:29.879305       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I0110 08:44:29.926668       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 08:44:29.950177       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:44:29.950309       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 08:44:29.954453       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:44:30.031809       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 08:44:30.730895       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0110 08:44:30.734675       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0110 08:44:30.734695       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 08:44:31.195458       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 08:44:31.229411       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 08:44:31.337778       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 08:44:31.343399       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0110 08:44:31.344603       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 08:44:31.349305       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 08:44:31.754439       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 08:44:32.413851       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 08:44:32.425094       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 08:44:32.433925       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0110 08:44:37.208386       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:44:37.212396       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:44:37.406028       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 08:44:37.605322       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [855477d18a3a24c8ce5384ee94e9fbbf34b25e8c7221d8828b4fbb3bbb98e8b5] <==
	I0110 08:44:36.561920       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.561986       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562160       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562172       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562196       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562217       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562233       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562257       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 08:44:36.562307       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562392       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562458       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562502       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562583       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562650       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-678123"
	I0110 08:44:36.562008       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.562727       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0110 08:44:36.563245       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.567707       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:44:36.570428       1 range_allocator.go:433] "Set node PodCIDR" node="pause-678123" podCIDRs=["10.244.0.0/24"]
	I0110 08:44:36.584778       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.661628       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:36.661647       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 08:44:36.661653       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 08:44:36.668130       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:51.564038       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [0ec28ff9a5a230ef38b5b4a4e2fb64fdcf59fa83b33592705b9c7c3586711f1c] <==
	I0110 08:44:38.101696       1 server_linux.go:53] "Using iptables proxy"
	I0110 08:44:38.185224       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:44:38.286207       1 shared_informer.go:377] "Caches are synced"
	I0110 08:44:38.286303       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 08:44:38.286456       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 08:44:38.337577       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 08:44:38.351860       1 server_linux.go:136] "Using iptables Proxier"
	I0110 08:44:38.388352       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 08:44:38.389677       1 server.go:529] "Version info" version="v1.35.0"
	I0110 08:44:38.389991       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:44:38.391630       1 config.go:200] "Starting service config controller"
	I0110 08:44:38.394283       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 08:44:38.393280       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 08:44:38.394442       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 08:44:38.393204       1 config.go:106] "Starting endpoint slice config controller"
	I0110 08:44:38.394494       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 08:44:38.392856       1 config.go:309] "Starting node config controller"
	I0110 08:44:38.394541       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 08:44:38.394564       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 08:44:38.495289       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 08:44:38.495362       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 08:44:38.495396       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1206eca28b9b971a40e042d2cbfbee5210ee9fec259b792781c02e55620e92db] <==
	E0110 08:44:29.792575       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 08:44:29.792522       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 08:44:29.793022       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 08:44:29.793076       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 08:44:29.793120       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 08:44:29.793246       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 08:44:29.793336       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 08:44:29.793554       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 08:44:29.793675       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 08:44:29.794100       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 08:44:29.794276       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 08:44:29.794331       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 08:44:29.794346       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 08:44:30.616575       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 08:44:30.703335       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 08:44:30.761581       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 08:44:30.767013       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 08:44:30.836262       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E0110 08:44:30.856511       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 08:44:30.879002       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 08:44:30.881800       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 08:44:30.920377       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 08:44:30.964878       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 08:44:30.985102       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	I0110 08:44:32.784980       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 08:44:37 pause-678123 kubelet[1280]: I0110 08:44:37.683712    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlldx\" (UniqueName: \"kubernetes.io/projected/92ff064a-cbcf-4754-924c-2b7be0b8d914-kube-api-access-tlldx\") pod \"kube-proxy-tp5db\" (UID: \"92ff064a-cbcf-4754-924c-2b7be0b8d914\") " pod="kube-system/kube-proxy-tp5db"
	Jan 10 08:44:37 pause-678123 kubelet[1280]: I0110 08:44:37.683727    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/92ff064a-cbcf-4754-924c-2b7be0b8d914-xtables-lock\") pod \"kube-proxy-tp5db\" (UID: \"92ff064a-cbcf-4754-924c-2b7be0b8d914\") " pod="kube-system/kube-proxy-tp5db"
	Jan 10 08:44:37 pause-678123 kubelet[1280]: I0110 08:44:37.683776    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-857xp\" (UniqueName: \"kubernetes.io/projected/ed69837a-7d90-4463-b544-c354590fc785-kube-api-access-857xp\") pod \"kindnet-tpclh\" (UID: \"ed69837a-7d90-4463-b544-c354590fc785\") " pod="kube-system/kindnet-tpclh"
	Jan 10 08:44:38 pause-678123 kubelet[1280]: I0110 08:44:38.302763    1280 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-tp5db" podStartSLOduration=1.302729465 podStartE2EDuration="1.302729465s" podCreationTimestamp="2026-01-10 08:44:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 08:44:38.301780877 +0000 UTC m=+6.145475095" watchObservedRunningTime="2026-01-10 08:44:38.302729465 +0000 UTC m=+6.146423685"
	Jan 10 08:44:40 pause-678123 kubelet[1280]: I0110 08:44:40.305782    1280 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-tpclh" podStartSLOduration=1.19266468 podStartE2EDuration="3.30576562s" podCreationTimestamp="2026-01-10 08:44:37 +0000 UTC" firstStartedPulling="2026-01-10 08:44:37.958493893 +0000 UTC m=+5.802188106" lastFinishedPulling="2026-01-10 08:44:40.071594836 +0000 UTC m=+7.915289046" observedRunningTime="2026-01-10 08:44:40.305729709 +0000 UTC m=+8.149423918" watchObservedRunningTime="2026-01-10 08:44:40.30576562 +0000 UTC m=+8.149459849"
	Jan 10 08:44:40 pause-678123 kubelet[1280]: E0110 08:44:40.539193    1280 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-678123" containerName="kube-apiserver"
	Jan 10 08:44:41 pause-678123 kubelet[1280]: E0110 08:44:41.216558    1280 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-678123" containerName="kube-scheduler"
	Jan 10 08:44:41 pause-678123 kubelet[1280]: E0110 08:44:41.419438    1280 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-678123" containerName="etcd"
	Jan 10 08:44:45 pause-678123 kubelet[1280]: E0110 08:44:45.387142    1280 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-pause-678123" containerName="kube-controller-manager"
	Jan 10 08:44:50 pause-678123 kubelet[1280]: E0110 08:44:50.545847    1280 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-pause-678123" containerName="kube-apiserver"
	Jan 10 08:44:50 pause-678123 kubelet[1280]: I0110 08:44:50.933832    1280 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 10 08:44:51 pause-678123 kubelet[1280]: I0110 08:44:51.084937    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/79d1ba64-372e-4517-8266-1498a7d0ae38-config-volume\") pod \"coredns-7d764666f9-5f9zf\" (UID: \"79d1ba64-372e-4517-8266-1498a7d0ae38\") " pod="kube-system/coredns-7d764666f9-5f9zf"
	Jan 10 08:44:51 pause-678123 kubelet[1280]: I0110 08:44:51.084997    1280 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8l9h\" (UniqueName: \"kubernetes.io/projected/79d1ba64-372e-4517-8266-1498a7d0ae38-kube-api-access-d8l9h\") pod \"coredns-7d764666f9-5f9zf\" (UID: \"79d1ba64-372e-4517-8266-1498a7d0ae38\") " pod="kube-system/coredns-7d764666f9-5f9zf"
	Jan 10 08:44:51 pause-678123 kubelet[1280]: E0110 08:44:51.221153    1280 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-pause-678123" containerName="kube-scheduler"
	Jan 10 08:44:51 pause-678123 kubelet[1280]: E0110 08:44:51.420877    1280 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-pause-678123" containerName="etcd"
	Jan 10 08:44:52 pause-678123 kubelet[1280]: E0110 08:44:52.322406    1280 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5f9zf" containerName="coredns"
	Jan 10 08:44:52 pause-678123 kubelet[1280]: I0110 08:44:52.332955    1280 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-5f9zf" podStartSLOduration=15.332934291 podStartE2EDuration="15.332934291s" podCreationTimestamp="2026-01-10 08:44:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 08:44:52.332835426 +0000 UTC m=+20.176529644" watchObservedRunningTime="2026-01-10 08:44:52.332934291 +0000 UTC m=+20.176628507"
	Jan 10 08:44:53 pause-678123 kubelet[1280]: E0110 08:44:53.324157    1280 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5f9zf" containerName="coredns"
	Jan 10 08:44:54 pause-678123 kubelet[1280]: E0110 08:44:54.328726    1280 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-5f9zf" containerName="coredns"
	Jan 10 08:44:57 pause-678123 kubelet[1280]: E0110 08:44:57.273788    1280 kubelet.go:3130] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized"
	Jan 10 08:45:00 pause-678123 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 08:45:00 pause-678123 kubelet[1280]: I0110 08:45:00.877932    1280 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jan 10 08:45:00 pause-678123 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 08:45:00 pause-678123 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 08:45:00 pause-678123 systemd[1]: kubelet.service: Consumed 1.250s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-678123 -n pause-678123
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-678123 -n pause-678123: exit status 2 (333.524446ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-678123 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.20s)
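
Editor's note: the post-mortem above closes by listing pods whose phase is not Running (helpers_test.go:270). The following is a minimal standalone sketch, in Go like the test suite, of reproducing that same check by hand; the context name "pause-678123" is taken from the log above, and this is an illustrative helper, not part of minikube's harness.

	// postmortem.go: list every pod not in the Running phase, mirroring the
	// kubectl invocation shown in the post-mortem above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl",
			"--context", "pause-678123",
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running",
		).CombinedOutput()
		if err != nil {
			// kubectl exits non-zero if the API server is unreachable,
			// which is itself useful post-mortem signal.
			fmt.Printf("kubectl failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("non-Running pods: %q\n", out)
	}
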

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-093083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-093083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (522.790448ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:53:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-093083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-093083 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-093083 describe deploy/metrics-server -n kube-system: exit status 1 (61.92173ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-093083 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
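
Editor's note: the MK_ADDON_ENABLE_PAUSED stderr above points at the likely root cause: the paused-state probe shells out to `sudo runc list -f json`, but the CRI-O configuration dumped earlier in this report uses crun as the default runtime (runtime_root = "/run/crun"), so /run/runc never exists and the probe exits with status 1. Below is a minimal Go sketch of that probe for local reproduction; it is a hypothetical helper, not minikube source.

	// runtimecheck.go: reproduce the probe quoted in the stderr above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// listRuncContainers runs "sudo runc list -f json" exactly as the log
	// shows. On a node whose CRI-O default runtime is crun, runc has no
	// /run/runc state directory and exits with status 1.
	func listRuncContainers() (string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := listRuncContainers()
		if err != nil {
			// Expected on this image: "open /run/runc: no such file or directory".
			fmt.Printf("runc probe failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("containers: %s\n", out)
	}
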
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-093083
helpers_test.go:244: (dbg) docker inspect old-k8s-version-093083:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5a78f6c87c300c6e0dd5f3bbe31e9ea3aaf153e366cdb192469aeff0f98607cc",
	        "Created": "2026-01-10T08:52:09.133397359Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 289398,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T08:52:09.430402776Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/5a78f6c87c300c6e0dd5f3bbe31e9ea3aaf153e366cdb192469aeff0f98607cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5a78f6c87c300c6e0dd5f3bbe31e9ea3aaf153e366cdb192469aeff0f98607cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/5a78f6c87c300c6e0dd5f3bbe31e9ea3aaf153e366cdb192469aeff0f98607cc/hosts",
	        "LogPath": "/var/lib/docker/containers/5a78f6c87c300c6e0dd5f3bbe31e9ea3aaf153e366cdb192469aeff0f98607cc/5a78f6c87c300c6e0dd5f3bbe31e9ea3aaf153e366cdb192469aeff0f98607cc-json.log",
	        "Name": "/old-k8s-version-093083",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-093083:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-093083",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5a78f6c87c300c6e0dd5f3bbe31e9ea3aaf153e366cdb192469aeff0f98607cc",
	                "LowerDir": "/var/lib/docker/overlay2/d070b5e56f95f0eb086a5bbe43eeabd880e14f061ffc4bc06dcbc47a66b72ad3-init/diff:/var/lib/docker/overlay2/97b14eb520192356c991915ce74cd6094e6ef948f398ac688b667ea18491ccde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d070b5e56f95f0eb086a5bbe43eeabd880e14f061ffc4bc06dcbc47a66b72ad3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d070b5e56f95f0eb086a5bbe43eeabd880e14f061ffc4bc06dcbc47a66b72ad3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d070b5e56f95f0eb086a5bbe43eeabd880e14f061ffc4bc06dcbc47a66b72ad3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-093083",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-093083/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-093083",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-093083",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-093083",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7385a0e5b0271e0f76069f6e5c8a1311122faa34713efda09164f3d6945e7f5d",
	            "SandboxKey": "/var/run/docker/netns/7385a0e5b027",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-093083": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b8ccbd7d681c9cf4758976716607eccd2bce1e9581afb9f0c4894b2bbb7e4533",
	                    "EndpointID": "092ceaaec645e37aaa56b7bc2a9920acc020de32a3a7706fb6aedf4a489716bd",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "c2:0c:ca:a6:3d:95",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-093083",
	                        "5a78f6c87c30"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-093083 -n old-k8s-version-093083
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-093083 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-093083 logs -n 25: (1.723150978s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-472660 sudo systemctl cat kubelet --no-pager                                                                                                                  │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                   │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                  │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo cat /var/lib/kubelet/config.yaml                                                                                                                  │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo systemctl status docker --all --full --no-pager                                                                                                   │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │                     │
	│ ssh     │ -p flannel-472660 sudo systemctl cat docker --no-pager                                                                                                                   │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo cat /etc/docker/daemon.json                                                                                                                       │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │                     │
	│ ssh     │ -p flannel-472660 sudo docker system info                                                                                                                                │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │                     │
	│ ssh     │ -p flannel-472660 sudo systemctl status cri-docker --all --full --no-pager                                                                                               │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │                     │
	│ ssh     │ -p flannel-472660 sudo systemctl cat cri-docker --no-pager                                                                                                               │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                          │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │                     │
	│ ssh     │ -p flannel-472660 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                    │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo cri-dockerd --version                                                                                                                             │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo systemctl status containerd --all --full --no-pager                                                                                               │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │                     │
	│ ssh     │ -p flannel-472660 sudo systemctl cat containerd --no-pager                                                                                                               │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo cat /lib/systemd/system/containerd.service                                                                                                        │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo cat /etc/containerd/config.toml                                                                                                                   │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo containerd config dump                                                                                                                            │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo systemctl status crio --all --full --no-pager                                                                                                     │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo systemctl cat crio --no-pager                                                                                                                     │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                           │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo crio config                                                                                                                                       │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ delete  │ -p disable-driver-mounts-847921                                                                                                                                          │ disable-driver-mounts-847921 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p default-k8s-diff-port-225354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-093083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:53:00
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:53:00.232230  307694 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:53:00.232433  307694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:53:00.232441  307694 out.go:374] Setting ErrFile to fd 2...
	I0110 08:53:00.232446  307694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:53:00.232655  307694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:53:00.233181  307694 out.go:368] Setting JSON to false
	I0110 08:53:00.234330  307694 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2132,"bootTime":1768033048,"procs":316,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:53:00.234386  307694 start.go:143] virtualization: kvm guest
	I0110 08:53:00.236457  307694 out.go:179] * [default-k8s-diff-port-225354] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 08:53:00.237710  307694 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:53:00.237716  307694 notify.go:221] Checking for updates...
	I0110 08:53:00.240163  307694 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:53:00.241309  307694 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:53:00.242412  307694 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:53:00.243549  307694 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 08:53:00.244661  307694 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:53:00.246308  307694 config.go:182] Loaded profile config "embed-certs-072273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:53:00.246396  307694 config.go:182] Loaded profile config "no-preload-095312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:53:00.246463  307694 config.go:182] Loaded profile config "old-k8s-version-093083": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 08:53:00.246544  307694 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:53:00.272207  307694 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:53:00.272347  307694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:53:00.332314  307694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-10 08:53:00.321137687 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:53:00.332467  307694 docker.go:319] overlay module found
	I0110 08:53:00.334482  307694 out.go:179] * Using the docker driver based on user configuration
	I0110 08:53:00.335645  307694 start.go:309] selected driver: docker
	I0110 08:53:00.335662  307694 start.go:928] validating driver "docker" against <nil>
	I0110 08:53:00.335672  307694 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:53:00.336237  307694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:53:00.399865  307694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-10 08:53:00.389564509 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:53:00.400051  307694 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 08:53:00.400296  307694 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 08:53:00.402176  307694 out.go:179] * Using Docker driver with root privileges
	I0110 08:53:00.403301  307694 cni.go:84] Creating CNI manager for ""
	I0110 08:53:00.403364  307694 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:53:00.403374  307694 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 08:53:00.403427  307694 start.go:353] cluster config:
	{Name:default-k8s-diff-port-225354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-225354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:53:00.404647  307694 out.go:179] * Starting "default-k8s-diff-port-225354" primary control-plane node in "default-k8s-diff-port-225354" cluster
	I0110 08:53:00.405946  307694 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 08:53:00.407140  307694 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:53:00.408253  307694 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:53:00.408281  307694 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 08:53:00.408299  307694 cache.go:65] Caching tarball of preloaded images
	I0110 08:53:00.408349  307694 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:53:00.408374  307694 preload.go:251] Found /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 08:53:00.408381  307694 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 08:53:00.408463  307694 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/config.json ...
	I0110 08:53:00.408484  307694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/config.json: {Name:mk766c10af2c7986d1fdfd4b0318d94a64b10e07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:53:00.428887  307694 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 08:53:00.428913  307694 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 08:53:00.428934  307694 cache.go:243] Successfully downloaded all kic artifacts
	I0110 08:53:00.428967  307694 start.go:360] acquireMachinesLock for default-k8s-diff-port-225354: {Name:mk6f4cf32f69b6a51f12f83adcd3cd0eb0ae8cbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:53:00.429083  307694 start.go:364] duration metric: took 95.988µs to acquireMachinesLock for "default-k8s-diff-port-225354"
	I0110 08:53:00.429115  307694 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-225354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-225354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 08:53:00.429207  307694 start.go:125] createHost starting for "" (driver="docker")
	I0110 08:52:58.492303  299436 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 08:52:58.496554  299436 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0110 08:52:58.496570  299436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0110 08:52:58.509553  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0110 08:52:58.721371  299436 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 08:52:58.721446  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:52:58.721475  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-072273 minikube.k8s.io/updated_at=2026_01_10T08_52_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee minikube.k8s.io/name=embed-certs-072273 minikube.k8s.io/primary=true
	I0110 08:52:58.733297  299436 ops.go:34] apiserver oom_adj: -16
	I0110 08:52:58.806656  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:52:59.306881  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:52:59.807675  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:00.306858  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:00.806961  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:01.306885  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:01.806763  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:02.307631  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:02.807657  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:02.888947  299436 kubeadm.go:1114] duration metric: took 4.167561586s to wait for elevateKubeSystemPrivileges
	I0110 08:53:02.888995  299436 kubeadm.go:403] duration metric: took 11.874633956s to StartCluster
	I0110 08:53:02.889016  299436 settings.go:142] acquiring lock: {Name:mkbb32fc6441ceb31ce2923ea8999f8375298f08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:53:02.889100  299436 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:53:02.891136  299436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/kubeconfig: {Name:mk0d29b3b0ee1fd71729aff31f901be50a2d0664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:53:02.891405  299436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 08:53:02.891424  299436 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 08:53:02.891509  299436 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 08:53:02.891592  299436 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-072273"
	I0110 08:53:02.891611  299436 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-072273"
	I0110 08:53:02.891631  299436 config.go:182] Loaded profile config "embed-certs-072273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:53:02.891647  299436 addons.go:70] Setting default-storageclass=true in profile "embed-certs-072273"
	I0110 08:53:02.891660  299436 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-072273"
	I0110 08:53:02.891642  299436 host.go:66] Checking if "embed-certs-072273" exists ...
	I0110 08:53:02.892052  299436 cli_runner.go:164] Run: docker container inspect embed-certs-072273 --format={{.State.Status}}
	I0110 08:53:02.892196  299436 cli_runner.go:164] Run: docker container inspect embed-certs-072273 --format={{.State.Status}}
	I0110 08:53:02.893822  299436 out.go:179] * Verifying Kubernetes components...
	I0110 08:53:02.896106  299436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:53:02.924537  299436 addons.go:239] Setting addon default-storageclass=true in "embed-certs-072273"
	I0110 08:53:02.924573  299436 host.go:66] Checking if "embed-certs-072273" exists ...
	I0110 08:53:02.925313  299436 cli_runner.go:164] Run: docker container inspect embed-certs-072273 --format={{.State.Status}}
	I0110 08:53:02.925372  299436 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Jan 10 08:52:51 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:51.813501686Z" level=info msg="Starting container: 3cf5dd4d7fd54b85b52da73a225204608882f60c8e8b69ddf08bf85342938f2e" id=b388feb2-12d5-4c21-a151-f50a0b1c5269 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:52:51 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:51.815824407Z" level=info msg="Started container" PID=2168 containerID=3cf5dd4d7fd54b85b52da73a225204608882f60c8e8b69ddf08bf85342938f2e description=kube-system/coredns-5dd5756b68-sscts/coredns id=b388feb2-12d5-4c21-a151-f50a0b1c5269 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0efa23e25ec0645803564fd45c2c465f99a112baf65deb48b0895dd0f49f3a5d
	Jan 10 08:52:55 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:55.195278854Z" level=info msg="Running pod sandbox: default/busybox/POD" id=d12d9ac7-4d30-4927-ad72-6640b3a38754 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 08:52:55 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:55.195372126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:52:55 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:55.20103424Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f8768c52736b14616548b7895a7fbe256b0a363330158fd5929c862ae4dfc7ed UID:79d1d319-c830-45c8-ae4c-0e12a1b99481 NetNS:/var/run/netns/e6552f2d-4137-4be7-8603-6af5b0a63ec6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0008b0688}] Aliases:map[]}"
	Jan 10 08:52:55 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:55.201081992Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 10 08:52:55 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:55.222842268Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f8768c52736b14616548b7895a7fbe256b0a363330158fd5929c862ae4dfc7ed UID:79d1d319-c830-45c8-ae4c-0e12a1b99481 NetNS:/var/run/netns/e6552f2d-4137-4be7-8603-6af5b0a63ec6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0008b0688}] Aliases:map[]}"
	Jan 10 08:52:55 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:55.223037776Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 10 08:52:55 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:55.223947412Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 10 08:52:55 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:55.225097137Z" level=info msg="Ran pod sandbox f8768c52736b14616548b7895a7fbe256b0a363330158fd5929c862ae4dfc7ed with infra container: default/busybox/POD" id=d12d9ac7-4d30-4927-ad72-6640b3a38754 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 08:52:55 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:55.226476326Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=13788ea2-9ebd-4702-8dce-fd383ad60960 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:52:55 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:55.226619765Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=13788ea2-9ebd-4702-8dce-fd383ad60960 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:52:55 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:55.226717667Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=13788ea2-9ebd-4702-8dce-fd383ad60960 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:52:55 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:55.227581776Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3b3462f1-2e60-4fa9-aa08-9e6d38d7ef65 name=/runtime.v1.ImageService/PullImage
	Jan 10 08:52:55 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:55.228160373Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 10 08:52:56 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:56.525427362Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=3b3462f1-2e60-4fa9-aa08-9e6d38d7ef65 name=/runtime.v1.ImageService/PullImage
	Jan 10 08:52:56 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:56.529009848Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=389fa5db-1a63-46e5-8f3d-6eb7aeb2c4fa name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:52:56 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:56.530871855Z" level=info msg="Creating container: default/busybox/busybox" id=eaff442f-cd08-4291-858e-72682a79bb80 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:52:56 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:56.531023291Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:52:56 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:56.535803072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:52:56 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:56.536479346Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:52:56 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:56.57542949Z" level=info msg="Created container bde911bacad9d2801cbf40bb5889ccd9ce56b0a310274ad573c10daa14f9c254: default/busybox/busybox" id=eaff442f-cd08-4291-858e-72682a79bb80 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:52:56 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:56.576361841Z" level=info msg="Starting container: bde911bacad9d2801cbf40bb5889ccd9ce56b0a310274ad573c10daa14f9c254" id=6845dc34-d284-41b6-86fe-62d8d2711cd5 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:52:56 old-k8s-version-093083 crio[774]: time="2026-01-10T08:52:56.578972152Z" level=info msg="Started container" PID=2242 containerID=bde911bacad9d2801cbf40bb5889ccd9ce56b0a310274ad573c10daa14f9c254 description=default/busybox/busybox id=6845dc34-d284-41b6-86fe-62d8d2711cd5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f8768c52736b14616548b7895a7fbe256b0a363330158fd5929c862ae4dfc7ed
	Jan 10 08:53:03 old-k8s-version-093083 crio[774]: time="2026-01-10T08:53:03.03396044Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	bde911bacad9d       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   f8768c52736b1       busybox                                          default
	3cf5dd4d7fd54       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 seconds ago      Running             coredns                   0                   0efa23e25ec06       coredns-5dd5756b68-sscts                         kube-system
	2e5c56f70c39a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   5a28766f5a6d7       storage-provisioner                              kube-system
	6b4ac5c916ee5       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    24 seconds ago      Running             kindnet-cni               0                   483bba14219e9       kindnet-nn64b                                    kube-system
	65578a0315ac0       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      26 seconds ago      Running             kube-proxy                0                   efefd94ab2f6d       kube-proxy-r7qzb                                 kube-system
	3125f8f4ec9c9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      45 seconds ago      Running             etcd                      0                   0ecc1c02b76e7       etcd-old-k8s-version-093083                      kube-system
	5b913201ccd79       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      45 seconds ago      Running             kube-scheduler            0                   559fb5fc56300       kube-scheduler-old-k8s-version-093083            kube-system
	479d6702e1c32       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      45 seconds ago      Running             kube-controller-manager   0                   ea9811c28e00e       kube-controller-manager-old-k8s-version-093083   kube-system
	46425af1e7d61       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      45 seconds ago      Running             kube-apiserver            0                   2c43306bfd775       kube-apiserver-old-k8s-version-093083            kube-system
	
	
	==> coredns [3cf5dd4d7fd54b85b52da73a225204608882f60c8e8b69ddf08bf85342938f2e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38562 - 28850 "HINFO IN 5022437403495380697.4412108202642488888. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018769359s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-093083
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-093083
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=old-k8s-version-093083
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T08_52_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 08:52:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-093083
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 08:52:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 08:52:56 +0000   Sat, 10 Jan 2026 08:52:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 08:52:56 +0000   Sat, 10 Jan 2026 08:52:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 08:52:56 +0000   Sat, 10 Jan 2026 08:52:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 08:52:56 +0000   Sat, 10 Jan 2026 08:52:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-093083
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                c7a82a71-54f6-4520-9c6e-142f796b8561
	  Boot ID:                    7c12f492-59b6-440b-986c-741ece916a23
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-sscts                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-old-k8s-version-093083                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-nn64b                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-093083             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-093083    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-r7qzb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-093083             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 40s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s   kubelet          Node old-k8s-version-093083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s   kubelet          Node old-k8s-version-093083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s   kubelet          Node old-k8s-version-093083 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node old-k8s-version-093083 event: Registered Node old-k8s-version-093083 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-093083 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 4a 03 46 88 66 4e 08 06
	[  +6.406566] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.001429] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[ +24.002020] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7a 6f 49 7e 9d d1 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 3a ac 18 98 c9 08 06
	[Jan10 08:52] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4a 05 5c 86 cf df 08 06
	[  +0.000337] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.000602] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[  +0.739059] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	[  +1.988700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[ +20.040962] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 ac f8 cf fb 84 08 06
	[  +0.000375] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[  +0.000915] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	
	
	==> etcd [3125f8f4ec9c981e21bd1e05fb5b546f9b311216358fa8d1a514dcfaf39fa282] <==
	{"level":"info","ts":"2026-01-10T08:52:21.045929Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-10T08:52:21.045951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2026-01-10T08:52:21.045969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T08:52:21.045975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2026-01-10T08:52:21.045983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2026-01-10T08:52:21.04599Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2026-01-10T08:52:21.046969Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-093083 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T08:52:21.04703Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:52:21.047102Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:52:21.047103Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T08:52:21.047228Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T08:52:21.047323Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T08:52:21.047779Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T08:52:21.048348Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T08:52:21.048438Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T08:52:21.049319Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T08:52:21.04935Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"warn","ts":"2026-01-10T08:52:25.662122Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.957882ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/old-k8s-version-093083\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2026-01-10T08:52:25.662223Z","caller":"traceutil/trace.go:171","msg":"trace[2131596700] range","detail":"{range_begin:/registry/leases/kube-node-lease/old-k8s-version-093083; range_end:; response_count:0; response_revision:226; }","duration":"148.092273ms","start":"2026-01-10T08:52:25.514109Z","end":"2026-01-10T08:52:25.662201Z","steps":["trace[2131596700] 'agreement among raft nodes before linearized reading'  (duration: 147.875458ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-10T08:52:25.830939Z","caller":"traceutil/trace.go:171","msg":"trace[575365853] linearizableReadLoop","detail":"{readStateIndex:241; appliedIndex:239; }","duration":"140.578255ms","start":"2026-01-10T08:52:25.690345Z","end":"2026-01-10T08:52:25.830923Z","steps":["trace[575365853] 'read index received'  (duration: 59.427234ms)","trace[575365853] 'applied index is now lower than readState.Index'  (duration: 81.150135ms)"],"step_count":2}
	{"level":"info","ts":"2026-01-10T08:52:25.830949Z","caller":"traceutil/trace.go:171","msg":"trace[1252313361] transaction","detail":"{read_only:false; response_revision:234; number_of_response:1; }","duration":"140.765189ms","start":"2026-01-10T08:52:25.690129Z","end":"2026-01-10T08:52:25.830894Z","steps":["trace[1252313361] 'process raft request'  (duration: 112.639696ms)","trace[1252313361] 'compare'  (duration: 27.878952ms)"],"step_count":2}
	{"level":"info","ts":"2026-01-10T08:52:25.831059Z","caller":"traceutil/trace.go:171","msg":"trace[1158512245] transaction","detail":"{read_only:false; response_revision:235; number_of_response:1; }","duration":"132.963177ms","start":"2026-01-10T08:52:25.698046Z","end":"2026-01-10T08:52:25.831009Z","steps":["trace[1158512245] 'process raft request'  (duration: 132.770947ms)"],"step_count":1}
	{"level":"warn","ts":"2026-01-10T08:52:25.831194Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.844168ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ranges/serviceips\" ","response":"range_response_count:1 size:115"}
	{"level":"info","ts":"2026-01-10T08:52:25.831239Z","caller":"traceutil/trace.go:171","msg":"trace[91211377] range","detail":"{range_begin:/registry/ranges/serviceips; range_end:; response_count:1; response_revision:236; }","duration":"140.907034ms","start":"2026-01-10T08:52:25.690322Z","end":"2026-01-10T08:52:25.831229Z","steps":["trace[91211377] 'agreement among raft nodes before linearized reading'  (duration: 140.668512ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-10T08:52:41.872126Z","caller":"traceutil/trace.go:171","msg":"trace[1382072517] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"224.440169ms","start":"2026-01-10T08:52:41.647646Z","end":"2026-01-10T08:52:41.872086Z","steps":["trace[1382072517] 'process raft request'  (duration: 129.564882ms)","trace[1382072517] 'compare'  (duration: 94.717202ms)"],"step_count":2}
	
	
	==> kernel <==
	 08:53:05 up 35 min,  0 user,  load average: 8.50, 4.37, 2.61
	Linux old-k8s-version-093083 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6b4ac5c916ee52a950e91e9b20794dbfbb97cc3b7277e830a11b04f21e3d7ccf] <==
	I0110 08:52:40.850134       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 08:52:40.911048       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0110 08:52:40.911354       1 main.go:148] setting mtu 1500 for CNI 
	I0110 08:52:40.911388       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 08:52:40.911412       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T08:52:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 08:52:41.212239       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 08:52:41.212335       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 08:52:41.212350       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 08:52:41.212858       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 08:52:41.612903       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 08:52:41.612935       1 metrics.go:72] Registering metrics
	I0110 08:52:41.612990       1 controller.go:711] "Syncing nftables rules"
	I0110 08:52:51.216828       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0110 08:52:51.216894       1 main.go:301] handling current node
	I0110 08:53:01.215384       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0110 08:53:01.215455       1 main.go:301] handling current node
	
	
	==> kube-apiserver [46425af1e7d616b129dd16102a0d1e7fe137ee9303a6ab52654b94588d4aa25d] <==
	I0110 08:52:22.274832       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0110 08:52:22.274903       1 shared_informer.go:318] Caches are synced for configmaps
	I0110 08:52:22.274840       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0110 08:52:22.275840       1 controller.go:624] quota admission added evaluator for: namespaces
	I0110 08:52:22.284981       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0110 08:52:22.285022       1 aggregator.go:166] initial CRD sync complete...
	I0110 08:52:22.285031       1 autoregister_controller.go:141] Starting autoregister controller
	I0110 08:52:22.285038       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 08:52:22.285046       1 cache.go:39] Caches are synced for autoregister controller
	I0110 08:52:22.300288       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 08:52:23.180103       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0110 08:52:23.183677       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0110 08:52:23.183699       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0110 08:52:23.640225       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 08:52:23.684328       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 08:52:23.787240       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 08:52:23.793669       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I0110 08:52:23.795087       1 controller.go:624] quota admission added evaluator for: endpoints
	I0110 08:52:23.799054       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 08:52:24.225953       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0110 08:52:25.671294       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0110 08:52:25.837505       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 08:52:25.848448       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0110 08:52:37.315846       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0110 08:52:38.217207       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [479d6702e1c32a53f6a7183d25e81a93e6fd70a5f46ba9f0e58b4c1d44afba79] <==
	I0110 08:52:37.421045       1 shared_informer.go:318] Caches are synced for resource quota
	I0110 08:52:37.461328       1 shared_informer.go:318] Caches are synced for disruption
	I0110 08:52:37.516597       1 shared_informer.go:318] Caches are synced for resource quota
	I0110 08:52:37.840872       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 08:52:37.861812       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 08:52:37.861976       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0110 08:52:38.223285       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I0110 08:52:38.321684       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-fcfrd"
	I0110 08:52:38.328190       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-sscts"
	I0110 08:52:38.335196       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="112.819449ms"
	I0110 08:52:38.345545       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.294136ms"
	I0110 08:52:38.345683       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="92.07µs"
	I0110 08:52:38.940860       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0110 08:52:38.967751       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-fcfrd"
	I0110 08:52:38.988140       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="48.825045ms"
	I0110 08:52:39.003296       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.072006ms"
	I0110 08:52:39.005562       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="749.589µs"
	I0110 08:52:51.426946       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="136.229µs"
	I0110 08:52:51.455655       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="223.452µs"
	I0110 08:52:52.264659       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0110 08:52:52.265193       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	I0110 08:52:52.265215       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-sscts" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-sscts"
	I0110 08:52:52.613116       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="134.241µs"
	I0110 08:52:52.647106       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.015395ms"
	I0110 08:52:52.647226       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.508µs"
	
	
	==> kube-proxy [65578a0315ac0ce61e1bef4d9df20665d5dc5c712fc156c674c6887162293935] <==
	I0110 08:52:38.363873       1 server_others.go:69] "Using iptables proxy"
	I0110 08:52:38.375890       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I0110 08:52:38.403522       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 08:52:38.406543       1 server_others.go:152] "Using iptables Proxier"
	I0110 08:52:38.406583       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0110 08:52:38.406591       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0110 08:52:38.406638       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0110 08:52:38.406987       1 server.go:846] "Version info" version="v1.28.0"
	I0110 08:52:38.407090       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:52:38.407863       1 config.go:188] "Starting service config controller"
	I0110 08:52:38.409475       1 config.go:315] "Starting node config controller"
	I0110 08:52:38.409524       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0110 08:52:38.409709       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0110 08:52:38.409905       1 config.go:97] "Starting endpoint slice config controller"
	I0110 08:52:38.410017       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0110 08:52:38.510032       1 shared_informer.go:318] Caches are synced for service config
	I0110 08:52:38.510089       1 shared_informer.go:318] Caches are synced for node config
	I0110 08:52:38.510137       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5b913201ccd794b482dc3f37e5e098b43a4bc97f231a1094995cf85407e7dbb9] <==
	W0110 08:52:22.266671       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0110 08:52:22.267138       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0110 08:52:22.266651       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0110 08:52:22.267159       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0110 08:52:22.266727       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0110 08:52:22.267221       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0110 08:52:22.266872       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0110 08:52:22.267250       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0110 08:52:22.266642       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0110 08:52:22.267459       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0110 08:52:22.267413       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0110 08:52:22.267485       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0110 08:52:23.098804       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0110 08:52:23.098854       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0110 08:52:23.134819       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0110 08:52:23.134941       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0110 08:52:23.246804       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0110 08:52:23.246933       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0110 08:52:23.263980       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0110 08:52:23.264094       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0110 08:52:23.369069       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0110 08:52:23.369111       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 08:52:23.410580       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0110 08:52:23.410613       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0110 08:52:26.462658       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jan 10 08:52:37 old-k8s-version-093083 kubelet[1410]: I0110 08:52:37.380621    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/6218f48d-6890-460d-ab63-f2b995735f05-cni-cfg\") pod \"kindnet-nn64b\" (UID: \"6218f48d-6890-460d-ab63-f2b995735f05\") " pod="kube-system/kindnet-nn64b"
	Jan 10 08:52:37 old-k8s-version-093083 kubelet[1410]: I0110 08:52:37.380658    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6218f48d-6890-460d-ab63-f2b995735f05-lib-modules\") pod \"kindnet-nn64b\" (UID: \"6218f48d-6890-460d-ab63-f2b995735f05\") " pod="kube-system/kindnet-nn64b"
	Jan 10 08:52:37 old-k8s-version-093083 kubelet[1410]: I0110 08:52:37.380693    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gnvh\" (UniqueName: \"kubernetes.io/projected/6218f48d-6890-460d-ab63-f2b995735f05-kube-api-access-4gnvh\") pod \"kindnet-nn64b\" (UID: \"6218f48d-6890-460d-ab63-f2b995735f05\") " pod="kube-system/kindnet-nn64b"
	Jan 10 08:52:37 old-k8s-version-093083 kubelet[1410]: I0110 08:52:37.380728    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d00e6eb1-52e3-461c-a8fe-da333af236b0-lib-modules\") pod \"kube-proxy-r7qzb\" (UID: \"d00e6eb1-52e3-461c-a8fe-da333af236b0\") " pod="kube-system/kube-proxy-r7qzb"
	Jan 10 08:52:37 old-k8s-version-093083 kubelet[1410]: I0110 08:52:37.422116    1410 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Jan 10 08:52:37 old-k8s-version-093083 kubelet[1410]: I0110 08:52:37.423478    1410 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Jan 10 08:52:37 old-k8s-version-093083 kubelet[1410]: E0110 08:52:37.488125    1410 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jan 10 08:52:37 old-k8s-version-093083 kubelet[1410]: E0110 08:52:37.488175    1410 projected.go:198] Error preparing data for projected volume kube-api-access-7mzxh for pod kube-system/kube-proxy-r7qzb: configmap "kube-root-ca.crt" not found
	Jan 10 08:52:37 old-k8s-version-093083 kubelet[1410]: E0110 08:52:37.488131    1410 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Jan 10 08:52:37 old-k8s-version-093083 kubelet[1410]: E0110 08:52:37.488271    1410 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d00e6eb1-52e3-461c-a8fe-da333af236b0-kube-api-access-7mzxh podName:d00e6eb1-52e3-461c-a8fe-da333af236b0 nodeName:}" failed. No retries permitted until 2026-01-10 08:52:37.988231074 +0000 UTC m=+12.569920109 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-7mzxh" (UniqueName: "kubernetes.io/projected/d00e6eb1-52e3-461c-a8fe-da333af236b0-kube-api-access-7mzxh") pod "kube-proxy-r7qzb" (UID: "d00e6eb1-52e3-461c-a8fe-da333af236b0") : configmap "kube-root-ca.crt" not found
	Jan 10 08:52:37 old-k8s-version-093083 kubelet[1410]: E0110 08:52:37.488279    1410 projected.go:198] Error preparing data for projected volume kube-api-access-4gnvh for pod kube-system/kindnet-nn64b: configmap "kube-root-ca.crt" not found
	Jan 10 08:52:37 old-k8s-version-093083 kubelet[1410]: E0110 08:52:37.488360    1410 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6218f48d-6890-460d-ab63-f2b995735f05-kube-api-access-4gnvh podName:6218f48d-6890-460d-ab63-f2b995735f05 nodeName:}" failed. No retries permitted until 2026-01-10 08:52:37.988329964 +0000 UTC m=+12.570019002 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-4gnvh" (UniqueName: "kubernetes.io/projected/6218f48d-6890-460d-ab63-f2b995735f05-kube-api-access-4gnvh") pod "kindnet-nn64b" (UID: "6218f48d-6890-460d-ab63-f2b995735f05") : configmap "kube-root-ca.crt" not found
	Jan 10 08:52:41 old-k8s-version-093083 kubelet[1410]: I0110 08:52:41.643685    1410 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-r7qzb" podStartSLOduration=4.643632022 podCreationTimestamp="2026-01-10 08:52:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 08:52:38.588088599 +0000 UTC m=+13.169777637" watchObservedRunningTime="2026-01-10 08:52:41.643632022 +0000 UTC m=+16.225321058"
	Jan 10 08:52:41 old-k8s-version-093083 kubelet[1410]: I0110 08:52:41.643950    1410 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-nn64b" podStartSLOduration=2.29304562 podCreationTimestamp="2026-01-10 08:52:37 +0000 UTC" firstStartedPulling="2026-01-10 08:52:38.253433317 +0000 UTC m=+12.835122344" lastFinishedPulling="2026-01-10 08:52:40.604294735 +0000 UTC m=+15.185983764" observedRunningTime="2026-01-10 08:52:41.643599427 +0000 UTC m=+16.225288477" watchObservedRunningTime="2026-01-10 08:52:41.64390704 +0000 UTC m=+16.225596077"
	Jan 10 08:52:51 old-k8s-version-093083 kubelet[1410]: I0110 08:52:51.384314    1410 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Jan 10 08:52:51 old-k8s-version-093083 kubelet[1410]: I0110 08:52:51.421869    1410 topology_manager.go:215] "Topology Admit Handler" podUID="66d5c03e-07c5-414e-a5a9-9c8f28c9144f" podNamespace="kube-system" podName="storage-provisioner"
	Jan 10 08:52:51 old-k8s-version-093083 kubelet[1410]: I0110 08:52:51.426815    1410 topology_manager.go:215] "Topology Admit Handler" podUID="15c17a90-3522-443a-a0b5-b9e103e66464" podNamespace="kube-system" podName="coredns-5dd5756b68-sscts"
	Jan 10 08:52:51 old-k8s-version-093083 kubelet[1410]: I0110 08:52:51.488026    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/15c17a90-3522-443a-a0b5-b9e103e66464-config-volume\") pod \"coredns-5dd5756b68-sscts\" (UID: \"15c17a90-3522-443a-a0b5-b9e103e66464\") " pod="kube-system/coredns-5dd5756b68-sscts"
	Jan 10 08:52:51 old-k8s-version-093083 kubelet[1410]: I0110 08:52:51.488077    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxsf9\" (UniqueName: \"kubernetes.io/projected/15c17a90-3522-443a-a0b5-b9e103e66464-kube-api-access-bxsf9\") pod \"coredns-5dd5756b68-sscts\" (UID: \"15c17a90-3522-443a-a0b5-b9e103e66464\") " pod="kube-system/coredns-5dd5756b68-sscts"
	Jan 10 08:52:51 old-k8s-version-093083 kubelet[1410]: I0110 08:52:51.488117    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/66d5c03e-07c5-414e-a5a9-9c8f28c9144f-tmp\") pod \"storage-provisioner\" (UID: \"66d5c03e-07c5-414e-a5a9-9c8f28c9144f\") " pod="kube-system/storage-provisioner"
	Jan 10 08:52:51 old-k8s-version-093083 kubelet[1410]: I0110 08:52:51.488212    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqlnl\" (UniqueName: \"kubernetes.io/projected/66d5c03e-07c5-414e-a5a9-9c8f28c9144f-kube-api-access-qqlnl\") pod \"storage-provisioner\" (UID: \"66d5c03e-07c5-414e-a5a9-9c8f28c9144f\") " pod="kube-system/storage-provisioner"
	Jan 10 08:52:52 old-k8s-version-093083 kubelet[1410]: I0110 08:52:52.624205    1410 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-sscts" podStartSLOduration=14.624152611 podCreationTimestamp="2026-01-10 08:52:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 08:52:52.613410674 +0000 UTC m=+27.195099713" watchObservedRunningTime="2026-01-10 08:52:52.624152611 +0000 UTC m=+27.205841650"
	Jan 10 08:52:52 old-k8s-version-093083 kubelet[1410]: I0110 08:52:52.639492    1410 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.639433566 podCreationTimestamp="2026-01-10 08:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 08:52:52.62424276 +0000 UTC m=+27.205931798" watchObservedRunningTime="2026-01-10 08:52:52.639433566 +0000 UTC m=+27.221122604"
	Jan 10 08:52:54 old-k8s-version-093083 kubelet[1410]: I0110 08:52:54.891549    1410 topology_manager.go:215] "Topology Admit Handler" podUID="79d1d319-c830-45c8-ae4c-0e12a1b99481" podNamespace="default" podName="busybox"
	Jan 10 08:52:54 old-k8s-version-093083 kubelet[1410]: I0110 08:52:54.913580    1410 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r8j4\" (UniqueName: \"kubernetes.io/projected/79d1d319-c830-45c8-ae4c-0e12a1b99481-kube-api-access-2r8j4\") pod \"busybox\" (UID: \"79d1d319-c830-45c8-ae4c-0e12a1b99481\") " pod="default/busybox"
	
	
	==> storage-provisioner [2e5c56f70c39ac3965e2c1a87487f7db201686464b0e044b928124c07301836d] <==
	I0110 08:52:51.794381       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 08:52:51.804422       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 08:52:51.804474       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0110 08:52:51.812583       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 08:52:51.812772       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-093083_a712ad84-b5f4-44c1-9749-37f97b3762d6!
	I0110 08:52:51.812925       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6d81b80b-b507-4754-870f-26841432edd7", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-093083_a712ad84-b5f4-44c1-9749-37f97b3762d6 became leader
	I0110 08:52:51.913419       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-093083_a712ad84-b5f4-44c1-9749-37f97b3762d6!
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-093083 -n old-k8s-version-093083
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-093083 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (3.22s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.36s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-095312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-095312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (416.887051ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:53:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-095312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-095312 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-095312 describe deploy/metrics-server -n kube-system: exit status 1 (78.741939ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-095312 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-095312
helpers_test.go:244: (dbg) docker inspect no-preload-095312:

-- stdout --
	[
	    {
	        "Id": "b55d6d4fd1b202097df2085e713e014ab91340931a47b47e91685006d96552db",
	        "Created": "2026-01-10T08:52:10.613870109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 290498,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T08:52:10.6473724Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/b55d6d4fd1b202097df2085e713e014ab91340931a47b47e91685006d96552db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b55d6d4fd1b202097df2085e713e014ab91340931a47b47e91685006d96552db/hostname",
	        "HostsPath": "/var/lib/docker/containers/b55d6d4fd1b202097df2085e713e014ab91340931a47b47e91685006d96552db/hosts",
	        "LogPath": "/var/lib/docker/containers/b55d6d4fd1b202097df2085e713e014ab91340931a47b47e91685006d96552db/b55d6d4fd1b202097df2085e713e014ab91340931a47b47e91685006d96552db-json.log",
	        "Name": "/no-preload-095312",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-095312:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-095312",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b55d6d4fd1b202097df2085e713e014ab91340931a47b47e91685006d96552db",
	                "LowerDir": "/var/lib/docker/overlay2/564ca6ef3c40a4c5b327dca6e24a3966439d268b426f5aa657f7c665c9e2702e-init/diff:/var/lib/docker/overlay2/97b14eb520192356c991915ce74cd6094e6ef948f398ac688b667ea18491ccde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/564ca6ef3c40a4c5b327dca6e24a3966439d268b426f5aa657f7c665c9e2702e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/564ca6ef3c40a4c5b327dca6e24a3966439d268b426f5aa657f7c665c9e2702e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/564ca6ef3c40a4c5b327dca6e24a3966439d268b426f5aa657f7c665c9e2702e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-095312",
	                "Source": "/var/lib/docker/volumes/no-preload-095312/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-095312",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-095312",
	                "name.minikube.sigs.k8s.io": "no-preload-095312",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cde593b53f2507308dd4f4355400fc3df8e7864c40783cd9d01f82cec463bb9f",
	            "SandboxKey": "/var/run/docker/netns/cde593b53f25",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-095312": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6f7b848a3ceb686079b5c418ee8c05b8e3ebe9353b6cbb1033bc657f18ffab5a",
	                    "EndpointID": "573d9b69ebd334072e898754a3d0006589921ebe2508fc4cab58d320ca0014e9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "8e:93:c2:d5:89:19",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-095312",
	                        "b55d6d4fd1b2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-095312 -n no-preload-095312
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-095312 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-095312 logs -n 25: (1.0081104s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-472660 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                  │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo cat /var/lib/kubelet/config.yaml                                                                                                                  │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo systemctl status docker --all --full --no-pager                                                                                                   │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │                     │
	│ ssh     │ -p flannel-472660 sudo systemctl cat docker --no-pager                                                                                                                   │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo cat /etc/docker/daemon.json                                                                                                                       │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │                     │
	│ ssh     │ -p flannel-472660 sudo docker system info                                                                                                                                │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │                     │
	│ ssh     │ -p flannel-472660 sudo systemctl status cri-docker --all --full --no-pager                                                                                               │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │                     │
	│ ssh     │ -p flannel-472660 sudo systemctl cat cri-docker --no-pager                                                                                                               │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                          │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │                     │
	│ ssh     │ -p flannel-472660 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                    │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo cri-dockerd --version                                                                                                                             │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo systemctl status containerd --all --full --no-pager                                                                                               │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │                     │
	│ ssh     │ -p flannel-472660 sudo systemctl cat containerd --no-pager                                                                                                               │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo cat /lib/systemd/system/containerd.service                                                                                                        │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo cat /etc/containerd/config.toml                                                                                                                   │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo containerd config dump                                                                                                                            │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo systemctl status crio --all --full --no-pager                                                                                                     │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo systemctl cat crio --no-pager                                                                                                                     │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                           │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo crio config                                                                                                                                       │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ delete  │ -p disable-driver-mounts-847921                                                                                                                                          │ disable-driver-mounts-847921 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p default-k8s-diff-port-225354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-093083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-095312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ stop    │ -p old-k8s-version-093083 --alsologtostderr -v=3                                                                                                                         │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
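The table above is minikube's post-failure diagnostic sweep: it shells into the node and dumps the service unit and configuration of every runtime it knows about (docker, cri-docker, containerd, crio); rows with an empty second timestamp are commands that exited non-zero, e.g. because the docker daemon is not running on a crio node. A minimal sketch of driving one of these rows from Go, assuming the out/minikube-linux-amd64 binary and the flannel-472660 profile shown in the table are present locally:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Same shape as the table rows above: `minikube -p <profile> ssh -- <cmd>`.
	// Binary path and profile name are taken from the log; adjust for your checkout.
	cmd := exec.Command("out/minikube-linux-amd64",
		"-p", "flannel-472660",
		"ssh", "--", "sudo", "crio", "config")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// An empty completion timestamp in the table corresponds to err != nil here.
		log.Fatalf("ssh command failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}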
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:53:00
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:53:00.232230  307694 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:53:00.232433  307694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:53:00.232441  307694 out.go:374] Setting ErrFile to fd 2...
	I0110 08:53:00.232446  307694 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:53:00.232655  307694 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:53:00.233181  307694 out.go:368] Setting JSON to false
	I0110 08:53:00.234330  307694 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2132,"bootTime":1768033048,"procs":316,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:53:00.234386  307694 start.go:143] virtualization: kvm guest
	I0110 08:53:00.236457  307694 out.go:179] * [default-k8s-diff-port-225354] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 08:53:00.237710  307694 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:53:00.237716  307694 notify.go:221] Checking for updates...
	I0110 08:53:00.240163  307694 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:53:00.241309  307694 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:53:00.242412  307694 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:53:00.243549  307694 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 08:53:00.244661  307694 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:53:00.246308  307694 config.go:182] Loaded profile config "embed-certs-072273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:53:00.246396  307694 config.go:182] Loaded profile config "no-preload-095312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:53:00.246463  307694 config.go:182] Loaded profile config "old-k8s-version-093083": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 08:53:00.246544  307694 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:53:00.272207  307694 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:53:00.272347  307694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:53:00.332314  307694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-10 08:53:00.321137687 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:53:00.332467  307694 docker.go:319] overlay module found
	I0110 08:53:00.334482  307694 out.go:179] * Using the docker driver based on user configuration
	I0110 08:53:00.335645  307694 start.go:309] selected driver: docker
	I0110 08:53:00.335662  307694 start.go:928] validating driver "docker" against <nil>
	I0110 08:53:00.335672  307694 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:53:00.336237  307694 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:53:00.399865  307694 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-10 08:53:00.389564509 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:53:00.400051  307694 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 08:53:00.400296  307694 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 08:53:00.402176  307694 out.go:179] * Using Docker driver with root privileges
	I0110 08:53:00.403301  307694 cni.go:84] Creating CNI manager for ""
	I0110 08:53:00.403364  307694 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:53:00.403374  307694 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 08:53:00.403427  307694 start.go:353] cluster config:
	{Name:default-k8s-diff-port-225354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-225354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:53:00.404647  307694 out.go:179] * Starting "default-k8s-diff-port-225354" primary control-plane node in "default-k8s-diff-port-225354" cluster
	I0110 08:53:00.405946  307694 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 08:53:00.407140  307694 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:53:00.408253  307694 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:53:00.408281  307694 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 08:53:00.408299  307694 cache.go:65] Caching tarball of preloaded images
	I0110 08:53:00.408349  307694 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:53:00.408374  307694 preload.go:251] Found /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 08:53:00.408381  307694 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 08:53:00.408463  307694 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/config.json ...
	I0110 08:53:00.408484  307694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/config.json: {Name:mk766c10af2c7986d1fdfd4b0318d94a64b10e07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:53:00.428887  307694 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 08:53:00.428913  307694 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 08:53:00.428934  307694 cache.go:243] Successfully downloaded all kic artifacts
	I0110 08:53:00.428967  307694 start.go:360] acquireMachinesLock for default-k8s-diff-port-225354: {Name:mk6f4cf32f69b6a51f12f83adcd3cd0eb0ae8cbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:53:00.429083  307694 start.go:364] duration metric: took 95.988µs to acquireMachinesLock for "default-k8s-diff-port-225354"
	I0110 08:53:00.429115  307694 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-225354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-225354 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 08:53:00.429207  307694 start.go:125] createHost starting for "" (driver="docker")
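The cluster config dumped twice above is what profile.go persists to .minikube/profiles/default-k8s-diff-port-225354/config.json under a write lock (the lock.go line, with its 500ms retry delay and 1m0s timeout) before provisioning starts. A sketch of that save step; the ClusterConfig struct here is a hypothetical subset of the fields visible in the dump, not minikube's actual type:

package main

import (
	"encoding/json"
	"log"
	"os"
)

// ClusterConfig mirrors a few of the fields visible in the dump above;
// the field set is illustrative only.
type ClusterConfig struct {
	Name              string
	Driver            string
	Memory            int
	CPUs              int
	APIServerPort     int
	ContainerRuntime  string
	KubernetesVersion string
}

func main() {
	cfg := ClusterConfig{
		Name:              "default-k8s-diff-port-225354",
		Driver:            "docker",
		Memory:            3072,
		CPUs:              2,
		APIServerPort:     8444,
		ContainerRuntime:  "crio",
		KubernetesVersion: "v1.35.0",
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	// minikube does this write under the file lock noted above.
	if err := os.WriteFile("config.json", data, 0o644); err != nil {
		log.Fatal(err)
	}
}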
	I0110 08:52:58.492303  299436 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 08:52:58.496554  299436 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0110 08:52:58.496570  299436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0110 08:52:58.509553  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0110 08:52:58.721371  299436 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 08:52:58.721446  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:52:58.721475  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-072273 minikube.k8s.io/updated_at=2026_01_10T08_52_58_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee minikube.k8s.io/name=embed-certs-072273 minikube.k8s.io/primary=true
	I0110 08:52:58.733297  299436 ops.go:34] apiserver oom_adj: -16
	I0110 08:52:58.806656  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:52:59.306881  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:52:59.807675  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:00.306858  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:00.806961  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:01.306885  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:01.806763  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:02.307631  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:02.807657  299436 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:02.888947  299436 kubeadm.go:1114] duration metric: took 4.167561586s to wait for elevateKubeSystemPrivileges
	I0110 08:53:02.888995  299436 kubeadm.go:403] duration metric: took 11.874633956s to StartCluster
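The burst of identical `kubectl get sa default` runs above is a wait loop: after kubeadm finishes, minikube polls for the default service account on a roughly half-second cadence (08:52:58.806, 08:52:59.306, ...) until the call succeeds, which the kubeadm.go line then reports as 4.167561586s spent in elevateKubeSystemPrivileges. A sketch of that poll, with the binary and kubeconfig paths taken from the log and the overall timeout chosen here purely for illustration:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // illustrative timeout
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.35.0/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			log.Println("default service account exists")
			return
		}
		// ~500ms between attempts, matching the timestamps in the log.
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for default service account")
}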
	I0110 08:53:02.889016  299436 settings.go:142] acquiring lock: {Name:mkbb32fc6441ceb31ce2923ea8999f8375298f08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:53:02.889100  299436 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:53:02.891136  299436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/kubeconfig: {Name:mk0d29b3b0ee1fd71729aff31f901be50a2d0664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:53:02.891405  299436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 08:53:02.891424  299436 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 08:53:02.891509  299436 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 08:53:02.891592  299436 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-072273"
	I0110 08:53:02.891611  299436 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-072273"
	I0110 08:53:02.891631  299436 config.go:182] Loaded profile config "embed-certs-072273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:53:02.891647  299436 addons.go:70] Setting default-storageclass=true in profile "embed-certs-072273"
	I0110 08:53:02.891660  299436 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-072273"
	I0110 08:53:02.891642  299436 host.go:66] Checking if "embed-certs-072273" exists ...
	I0110 08:53:02.892052  299436 cli_runner.go:164] Run: docker container inspect embed-certs-072273 --format={{.State.Status}}
	I0110 08:53:02.892196  299436 cli_runner.go:164] Run: docker container inspect embed-certs-072273 --format={{.State.Status}}
	I0110 08:53:02.893822  299436 out.go:179] * Verifying Kubernetes components...
	I0110 08:53:02.896106  299436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:53:02.924537  299436 addons.go:239] Setting addon default-storageclass=true in "embed-certs-072273"
	I0110 08:53:02.924573  299436 host.go:66] Checking if "embed-certs-072273" exists ...
	I0110 08:53:02.925313  299436 cli_runner.go:164] Run: docker container inspect embed-certs-072273 --format={{.State.Status}}
	I0110 08:53:02.925372  299436 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 08:53:02.927029  299436 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 08:53:02.927050  299436 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 08:53:02.927109  299436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-072273
	I0110 08:53:02.959487  299436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/embed-certs-072273/id_rsa Username:docker}
	I0110 08:53:02.969663  299436 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 08:53:02.969690  299436 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 08:53:02.969758  299436 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-072273
	I0110 08:53:03.003667  299436 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/embed-certs-072273/id_rsa Username:docker}
	I0110 08:53:03.012923  299436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0110 08:53:03.074102  299436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:53:03.107682  299436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 08:53:03.153438  299436 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 08:53:03.445177  299436 start.go:987] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0110 08:53:03.447066  299436 node_ready.go:35] waiting up to 6m0s for node "embed-certs-072273" to be "Ready" ...
	I0110 08:53:04.134709  299436 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-072273" context rescaled to 1 replicas
	I0110 08:53:04.624085  299436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.470611449s)
	I0110 08:53:04.624108  299436 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.516391999s)
	I0110 08:53:04.778545  299436 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
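Each addon enabled above is two steps: the manifest is copied into /etc/kubernetes/addons on the node (the scp lines), then applied with kubectl under `sudo KUBECONFIG=...`. A sketch of the apply half, mirroring the exact command shape in the log:

package main

import (
	"log"
	"os/exec"
)

func main() {
	for _, manifest := range []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
	} {
		// KUBECONFIG is passed as an env assignment inside the sudo invocation,
		// exactly as the log shows.
		cmd := exec.Command("sudo",
			"KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0/kubectl",
			"apply", "-f", manifest)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("apply %s: %v\n%s", manifest, err, out)
		}
	}
}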
	I0110 08:53:00.431070  307694 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0110 08:53:00.431278  307694 start.go:159] libmachine.API.Create for "default-k8s-diff-port-225354" (driver="docker")
	I0110 08:53:00.431305  307694 client.go:173] LocalClient.Create starting
	I0110 08:53:00.431364  307694 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem
	I0110 08:53:00.431399  307694 main.go:144] libmachine: Decoding PEM data...
	I0110 08:53:00.431417  307694 main.go:144] libmachine: Parsing certificate...
	I0110 08:53:00.431468  307694 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem
	I0110 08:53:00.431489  307694 main.go:144] libmachine: Decoding PEM data...
	I0110 08:53:00.431501  307694 main.go:144] libmachine: Parsing certificate...
	I0110 08:53:00.431862  307694 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-225354 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0110 08:53:00.448977  307694 cli_runner.go:211] docker network inspect default-k8s-diff-port-225354 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0110 08:53:00.449104  307694 network_create.go:284] running [docker network inspect default-k8s-diff-port-225354] to gather additional debugging logs...
	I0110 08:53:00.449132  307694 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-225354
	W0110 08:53:00.466211  307694 cli_runner.go:211] docker network inspect default-k8s-diff-port-225354 returned with exit code 1
	I0110 08:53:00.466241  307694 network_create.go:287] error running [docker network inspect default-k8s-diff-port-225354]: docker network inspect default-k8s-diff-port-225354: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-225354 not found
	I0110 08:53:00.466254  307694 network_create.go:289] output of [docker network inspect default-k8s-diff-port-225354]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-225354 not found
	
	** /stderr **
	I0110 08:53:00.466378  307694 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:53:00.484544  307694 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9da35691088c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:0a:0c:fc:dc:fc:2f} reservation:<nil>}
	I0110 08:53:00.485233  307694 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-5ce9d5913249 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:3e:11:5d:21:c0:0b} reservation:<nil>}
	I0110 08:53:00.486056  307694 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-73a46a53fce2 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:be:e8:cf:3a:03:99} reservation:<nil>}
	I0110 08:53:00.486607  307694 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6f7b848a3ceb IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:46:a4:d8:a1:5f:4e} reservation:<nil>}
	I0110 08:53:00.487472  307694 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f08b90}
	I0110 08:53:00.487502  307694 network_create.go:124] attempt to create docker network default-k8s-diff-port-225354 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0110 08:53:00.487555  307694 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-225354 default-k8s-diff-port-225354
	I0110 08:53:00.537456  307694 network_create.go:108] docker network default-k8s-diff-port-225354 192.168.85.0/24 created
	I0110 08:53:00.537488  307694 kic.go:121] calculated static IP "192.168.85.2" for the "default-k8s-diff-port-225354" container
	I0110 08:53:00.537540  307694 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0110 08:53:00.555875  307694 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-225354 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-225354 --label created_by.minikube.sigs.k8s.io=true
	I0110 08:53:00.574794  307694 oci.go:103] Successfully created a docker volume default-k8s-diff-port-225354
	I0110 08:53:00.574877  307694 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-225354-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-225354 --entrypoint /usr/bin/test -v default-k8s-diff-port-225354:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -d /var/lib
	I0110 08:53:00.991662  307694 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-225354
	I0110 08:53:00.991727  307694 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:53:00.991778  307694 kic.go:194] Starting extracting preloaded images to volume ...
	I0110 08:53:00.991852  307694 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-225354:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir
	I0110 08:53:05.002682  307694 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-225354:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 -I lz4 -xf /preloaded.tar -C /extractDir: (4.010784986s)
	I0110 08:53:05.002710  307694 kic.go:203] duration metric: took 4.010949857s to extract preloaded images to volume ...
	W0110 08:53:05.002946  307694 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0110 08:53:05.003000  307694 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0110 08:53:05.003046  307694 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0110 08:53:05.076918  307694 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-225354 --name default-k8s-diff-port-225354 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-225354 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-225354 --network default-k8s-diff-port-225354 --ip 192.168.85.2 --volume default-k8s-diff-port-225354:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773
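The subnet hunt earlier in this start sequence begins at 192.168.49.0/24 and advances the third octet in steps of 9 (49 → 58 → 67 → 76 → 85), skipping any /24 already owned by an existing bridge, then creates the docker network and pins the node at .2. A sketch of that scan; the step of 9 is inferred from this log, not quoted from minikube's source:

package main

import "fmt"

func main() {
	// Third octets already claimed by existing bridges, per the "skipping subnet" lines.
	taken := map[int]bool{49: true, 58: true, 67: true, 76: true}

	// Step of 9 between candidates is inferred from the log above.
	for octet := 49; octet <= 255; octet += 9 {
		if taken[octet] {
			fmt.Printf("skipping subnet 192.168.%d.0/24: taken\n", octet)
			continue
		}
		fmt.Printf("using free private subnet 192.168.%d.0/24 (gateway .1, static node IP .2)\n", octet)
		return
	}
	fmt.Println("no free /24 found")
}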
	
	
	==> CRI-O <==
	Jan 10 08:52:54 no-preload-095312 crio[771]: time="2026-01-10T08:52:54.673370866Z" level=info msg="Starting container: 5c5fd605b0f9eca4a80243f69560167ea714d576f93679586d0f192ba1087e3d" id=92c4bd6f-fc99-4b86-b8d5-f19434e70b6d name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:52:54 no-preload-095312 crio[771]: time="2026-01-10T08:52:54.676058925Z" level=info msg="Started container" PID=2800 containerID=5c5fd605b0f9eca4a80243f69560167ea714d576f93679586d0f192ba1087e3d description=kube-system/coredns-7d764666f9-wpsnn/coredns id=92c4bd6f-fc99-4b86-b8d5-f19434e70b6d name=/runtime.v1.RuntimeService/StartContainer sandboxID=34cb7d4876a564ae2192ea1bb557723b3cb042f68c0f902d9cd93c2952c990ac
	Jan 10 08:52:57 no-preload-095312 crio[771]: time="2026-01-10T08:52:57.751211821Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b98ea63a-1daa-45e4-8c2e-9fb0b43a094c name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 08:52:57 no-preload-095312 crio[771]: time="2026-01-10T08:52:57.751315514Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:52:57 no-preload-095312 crio[771]: time="2026-01-10T08:52:57.761661407Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:761c2666f44340228b4e93bd08217ea562bc92dfbd73112ed96f38de33e81123 UID:b48219ff-c748-4c50-bc09-518ec890a3b3 NetNS:/var/run/netns/9e87ae70-2937-46ff-bebd-faa75e149d85 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000b36bd8}] Aliases:map[]}"
	Jan 10 08:52:57 no-preload-095312 crio[771]: time="2026-01-10T08:52:57.761703967Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 10 08:52:57 no-preload-095312 crio[771]: time="2026-01-10T08:52:57.797196958Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:761c2666f44340228b4e93bd08217ea562bc92dfbd73112ed96f38de33e81123 UID:b48219ff-c748-4c50-bc09-518ec890a3b3 NetNS:/var/run/netns/9e87ae70-2937-46ff-bebd-faa75e149d85 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000b36bd8}] Aliases:map[]}"
	Jan 10 08:52:57 no-preload-095312 crio[771]: time="2026-01-10T08:52:57.797430281Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 10 08:52:57 no-preload-095312 crio[771]: time="2026-01-10T08:52:57.798627141Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 10 08:52:57 no-preload-095312 crio[771]: time="2026-01-10T08:52:57.800249965Z" level=info msg="Ran pod sandbox 761c2666f44340228b4e93bd08217ea562bc92dfbd73112ed96f38de33e81123 with infra container: default/busybox/POD" id=b98ea63a-1daa-45e4-8c2e-9fb0b43a094c name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 08:52:57 no-preload-095312 crio[771]: time="2026-01-10T08:52:57.80190163Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=10413797-cd69-462c-a111-b1650b45782d name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:52:57 no-preload-095312 crio[771]: time="2026-01-10T08:52:57.802379297Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=10413797-cd69-462c-a111-b1650b45782d name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:52:57 no-preload-095312 crio[771]: time="2026-01-10T08:52:57.802595704Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=10413797-cd69-462c-a111-b1650b45782d name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:52:57 no-preload-095312 crio[771]: time="2026-01-10T08:52:57.804446151Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=82844eea-3416-4ff9-9079-99df2a7e12b8 name=/runtime.v1.ImageService/PullImage
	Jan 10 08:52:57 no-preload-095312 crio[771]: time="2026-01-10T08:52:57.805073051Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 10 08:52:59 no-preload-095312 crio[771]: time="2026-01-10T08:52:59.051464895Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=82844eea-3416-4ff9-9079-99df2a7e12b8 name=/runtime.v1.ImageService/PullImage
	Jan 10 08:52:59 no-preload-095312 crio[771]: time="2026-01-10T08:52:59.052064518Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e99b6bc2-9f81-46df-a711-2f0f906b22da name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:52:59 no-preload-095312 crio[771]: time="2026-01-10T08:52:59.05370228Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2ff72bde-01f0-46d8-a132-d6078e899be6 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:52:59 no-preload-095312 crio[771]: time="2026-01-10T08:52:59.05729719Z" level=info msg="Creating container: default/busybox/busybox" id=d633bd50-70bb-4144-bc32-927cae19bb5c name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:52:59 no-preload-095312 crio[771]: time="2026-01-10T08:52:59.05741888Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:52:59 no-preload-095312 crio[771]: time="2026-01-10T08:52:59.061405439Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:52:59 no-preload-095312 crio[771]: time="2026-01-10T08:52:59.062061834Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:52:59 no-preload-095312 crio[771]: time="2026-01-10T08:52:59.088314062Z" level=info msg="Created container adf952410375cd7b03fbaec9fa4e0b01ea797ff13f3633c795466b7ab2277b36: default/busybox/busybox" id=d633bd50-70bb-4144-bc32-927cae19bb5c name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:52:59 no-preload-095312 crio[771]: time="2026-01-10T08:52:59.089108422Z" level=info msg="Starting container: adf952410375cd7b03fbaec9fa4e0b01ea797ff13f3633c795466b7ab2277b36" id=4cf28ea6-a6ba-4318-bb5e-3772af874eac name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:52:59 no-preload-095312 crio[771]: time="2026-01-10T08:52:59.090951211Z" level=info msg="Started container" PID=2881 containerID=adf952410375cd7b03fbaec9fa4e0b01ea797ff13f3633c795466b7ab2277b36 description=default/busybox/busybox id=4cf28ea6-a6ba-4318-bb5e-3772af874eac name=/runtime.v1.RuntimeService/StartContainer sandboxID=761c2666f44340228b4e93bd08217ea562bc92dfbd73112ed96f38de33e81123
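The CRI-O lines above trace the standard CRI image path for the busybox pod: an ImageStatus miss, a PullImage by tag that resolves to the sha256 digest, a re-check, then CreateContainer and StartContainer. The check-then-pull half can be reproduced by hand with crictl from inside the node; a sketch, assuming `minikube ssh` access and crictl on the PATH:

package main

import (
	"log"
	"os/exec"
)

func main() {
	img := "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

	// ImageStatus equivalent: `crictl inspecti` exits non-zero when the image is absent.
	if err := exec.Command("sudo", "crictl", "inspecti", img).Run(); err != nil {
		log.Printf("image %s not found, pulling", img)
		// PullImage equivalent; resolves the tag to the digest seen in the log.
		if out, err := exec.Command("sudo", "crictl", "pull", img).CombinedOutput(); err != nil {
			log.Fatalf("pull failed: %v\n%s", err, out)
		}
	}
	log.Printf("image %s present", img)
}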
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	adf952410375c       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   761c2666f4434       busybox                                     default
	5c5fd605b0f9e       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      12 seconds ago      Running             coredns                   0                   34cb7d4876a56       coredns-7d764666f9-wpsnn                    kube-system
	1fa681d6416e4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   adc481e31a9b9       storage-provisioner                         kube-system
	d0b92327188a0       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    23 seconds ago      Running             kindnet-cni               0                   06d82859eba18       kindnet-tzmwv                               kube-system
	2d082d5351c82       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                      27 seconds ago      Running             kube-proxy                0                   f0317ede499ff       kube-proxy-vrzf6                            kube-system
	089293ae562e5       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                      38 seconds ago      Running             kube-scheduler            0                   f6dc83bbdc94c       kube-scheduler-no-preload-095312            kube-system
	e2d7e603789b2       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                      38 seconds ago      Running             kube-apiserver            0                   28220b74692e9       kube-apiserver-no-preload-095312            kube-system
	63f11cc21d3c0       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      38 seconds ago      Running             etcd                      0                   bc910d6c8a4e1       etcd-no-preload-095312                      kube-system
	f77cecbb9aa99       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                      38 seconds ago      Running             kube-controller-manager   0                   93cee6ebc0d67       kube-controller-manager-no-preload-095312   kube-system
	
	
	==> coredns [5c5fd605b0f9eca4a80243f69560167ea714d576f93679586d0f192ba1087e3d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:55251 - 26854 "HINFO IN 4867311537372119622.2049956620907253571. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.045557304s
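The HINFO query with the random two-label name above is worth a note: it is almost certainly the CoreDNS loop plugin's self-probe, which sends a random HINFO question through the forwarding path at startup; the NXDOMAIN answer means the probe did not come back around, i.e. no forwarding loop through the host resolver.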
	
	
	==> describe nodes <==
	Name:               no-preload-095312
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-095312
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=no-preload-095312
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T08_52_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 08:52:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-095312
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 08:53:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 08:53:04 +0000   Sat, 10 Jan 2026 08:52:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 08:53:04 +0000   Sat, 10 Jan 2026 08:52:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 08:53:04 +0000   Sat, 10 Jan 2026 08:52:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 08:53:04 +0000   Sat, 10 Jan 2026 08:52:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-095312
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                b2e47543-110f-4155-be9c-62c4fc9e6c69
	  Boot ID:                    7c12f492-59b6-440b-986c-741ece916a23
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-wpsnn                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-no-preload-095312                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-tzmwv                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-095312             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-no-preload-095312    200m (2%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-vrzf6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-095312             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  30s   node-controller  Node no-preload-095312 event: Registered Node no-preload-095312 in Controller
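As a quick consistency check on the Allocated resources block above: the pod table's CPU requests sum to 100m + 100m + 100m + 250m + 200m + 100m = 850m, which against the node's 8 allocatable CPUs (8000m) is the reported 10%; the lone 100m CPU limit comes from kindnet, the memory requests total 70Mi + 100Mi + 50Mi = 220Mi, and the memory limits total 170Mi + 50Mi = 220Mi from coredns and kindnet.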
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 4a 03 46 88 66 4e 08 06
	[  +6.406566] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.001429] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[ +24.002020] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7a 6f 49 7e 9d d1 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 3a ac 18 98 c9 08 06
	[Jan10 08:52] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4a 05 5c 86 cf df 08 06
	[  +0.000337] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.000602] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[  +0.739059] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	[  +1.988700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[ +20.040962] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 ac f8 cf fb 84 08 06
	[  +0.000375] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[  +0.000915] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
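The repeating "martian source" pairs are the host kernel flagging broadcast frames (destination ff:ff:ff:ff:ff:ff, ethertype 08 06, i.e. ARP) whose 10.244.0.x sources sit in the PodCIDR shown in the node description above; with kindnet's ptp-per-pod wiring these appear routinely as pods come and go and are generally benign noise rather than a failure signal.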
	
	
	==> etcd [63f11cc21d3c0d58558724760da6cead1863244e9f9f1150e9de9e70bf3d68ba] <==
	{"level":"info","ts":"2026-01-10T08:52:28.986681Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 1"}
	{"level":"info","ts":"2026-01-10T08:52:28.986690Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:52:28.986704Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T08:52:28.987474Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T08:52:28.987502Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:52:28.987520Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 2"}
	{"level":"info","ts":"2026-01-10T08:52:28.987529Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T08:52:28.988162Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:52:28.988563Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-095312 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T08:52:28.988570Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:52:28.988593Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:52:28.988758Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T08:52:28.988813Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T08:52:28.988941Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:52:28.989034Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:52:28.989072Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:52:28.989099Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T08:52:28.989203Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T08:52:28.989854Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:52:28.989925Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:52:28.993867Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T08:52:28.993998Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"warn","ts":"2026-01-10T08:52:31.369513Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.149782ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638357898111928073 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:discovery\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:discovery\" value_size:587 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2026-01-10T08:52:31.369651Z","caller":"traceutil/trace.go:172","msg":"trace[1359042511] transaction","detail":"{read_only:false; response_revision:71; number_of_response:1; }","duration":"161.250805ms","start":"2026-01-10T08:52:31.208372Z","end":"2026-01-10T08:52:31.369623Z","steps":["trace[1359042511] 'process raft request'  (duration: 60.56416ms)","trace[1359042511] 'compare'  (duration: 99.99985ms)"],"step_count":2}
	{"level":"info","ts":"2026-01-10T08:53:04.341459Z","caller":"traceutil/trace.go:172","msg":"trace[2031961967] transaction","detail":"{read_only:false; response_revision:445; number_of_response:1; }","duration":"109.814738ms","start":"2026-01-10T08:53:04.231623Z","end":"2026-01-10T08:53:04.341437Z","steps":["trace[2031961967] 'process raft request'  (duration: 109.653536ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:53:07 up 35 min,  0 user,  load average: 8.50, 4.37, 2.61
	Linux no-preload-095312 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d0b92327188a0813c14eb8f665ebe0119f3d35b2d59f8258ec0140f75e5fda48] <==
	I0110 08:52:43.626507       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 08:52:43.719983       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 08:52:43.720167       1 main.go:148] setting mtu 1500 for CNI 
	I0110 08:52:43.720199       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 08:52:43.720232       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T08:52:43Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 08:52:43.928429       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 08:52:43.928469       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 08:52:43.928484       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 08:52:43.929639       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 08:52:44.206422       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 08:52:44.206454       1 metrics.go:72] Registering metrics
	I0110 08:52:44.206555       1 controller.go:711] "Syncing nftables rules"
	I0110 08:52:53.927847       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 08:52:53.927930       1 main.go:301] handling current node
	I0110 08:53:03.930836       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 08:53:03.930896       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e2d7e603789b2d498e6576259375b4a2f4f6f63955d7dfb5327705ebbfbda1a3] <==
	E0110 08:52:30.176469       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0110 08:52:30.214875       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I0110 08:52:30.262500       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 08:52:30.265989       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:52:30.266032       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 08:52:30.273722       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:52:30.386313       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 08:52:31.067054       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0110 08:52:31.140344       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0110 08:52:31.140380       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 08:52:32.172156       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 08:52:32.213087       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 08:52:32.269623       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 08:52:32.275669       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I0110 08:52:32.276659       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 08:52:32.280995       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 08:52:33.084008       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 08:52:33.516330       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 08:52:33.527194       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 08:52:33.535085       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0110 08:52:38.792848       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:52:38.800440       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:52:39.045401       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 08:52:39.107140       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E0110 08:53:05.574819       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:45176: use of closed network connection
	
	
	==> kube-controller-manager [f77cecbb9aa99a476569802b096289a85c26a3718f45501e318793490516dab9] <==
	I0110 08:52:37.892809       1 shared_informer.go:377] "Caches are synced"
	I0110 08:52:37.892809       1 shared_informer.go:377] "Caches are synced"
	I0110 08:52:37.892842       1 shared_informer.go:377] "Caches are synced"
	I0110 08:52:37.892811       1 shared_informer.go:377] "Caches are synced"
	I0110 08:52:37.892876       1 shared_informer.go:377] "Caches are synced"
	I0110 08:52:37.892814       1 shared_informer.go:377] "Caches are synced"
	I0110 08:52:37.892832       1 shared_informer.go:377] "Caches are synced"
	I0110 08:52:37.893244       1 shared_informer.go:377] "Caches are synced"
	I0110 08:52:37.893347       1 shared_informer.go:377] "Caches are synced"
	I0110 08:52:37.893432       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 08:52:37.893522       1 shared_informer.go:377] "Caches are synced"
	I0110 08:52:37.893621       1 shared_informer.go:377] "Caches are synced"
	I0110 08:52:37.893502       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-095312"
	I0110 08:52:37.894291       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0110 08:52:37.896200       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:52:37.897485       1 shared_informer.go:377] "Caches are synced"
	I0110 08:52:37.897574       1 shared_informer.go:377] "Caches are synced"
	I0110 08:52:37.903422       1 shared_informer.go:377] "Caches are synced"
	I0110 08:52:37.903453       1 shared_informer.go:377] "Caches are synced"
	I0110 08:52:37.903539       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-095312" podCIDRs=["10.244.0.0/24"]
	I0110 08:52:37.993814       1 shared_informer.go:377] "Caches are synced"
	I0110 08:52:37.993901       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 08:52:37.993911       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 08:52:37.996987       1 shared_informer.go:377] "Caches are synced"
	I0110 08:52:57.897179       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [2d082d5351c82cd79aef9ee21010efc27fe9ee4cbc0fe14b0ee4b080d23eacd3] <==
	I0110 08:52:39.688558       1 server_linux.go:53] "Using iptables proxy"
	I0110 08:52:39.777090       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:52:39.878954       1 shared_informer.go:377] "Caches are synced"
	I0110 08:52:39.879000       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 08:52:39.879101       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 08:52:39.928249       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 08:52:39.928417       1 server_linux.go:136] "Using iptables Proxier"
	I0110 08:52:39.939489       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 08:52:39.945223       1 server.go:529] "Version info" version="v1.35.0"
	I0110 08:52:39.945263       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:52:39.948822       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 08:52:39.948844       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 08:52:39.948933       1 config.go:200] "Starting service config controller"
	I0110 08:52:39.948942       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 08:52:39.948959       1 config.go:106] "Starting endpoint slice config controller"
	I0110 08:52:39.948965       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 08:52:39.949860       1 config.go:309] "Starting node config controller"
	I0110 08:52:39.951968       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 08:52:39.952048       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 08:52:40.049983       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 08:52:40.050009       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 08:52:40.050038       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [089293ae562e5f652bdc4ecc43a458e98bc6f775770695d97edeee24ec9f32f9] <==
	E0110 08:52:30.138090       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 08:52:30.138601       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 08:52:30.138627       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 08:52:30.138688       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 08:52:30.138718       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 08:52:30.985659       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 08:52:30.986818       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 08:52:31.038098       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 08:52:31.061689       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 08:52:31.094057       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 08:52:31.204499       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 08:52:31.205425       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 08:52:31.338031       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 08:52:31.341186       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 08:52:31.349659       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 08:52:31.358173       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 08:52:31.396618       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 08:52:31.442872       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E0110 08:52:31.499408       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 08:52:31.552885       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 08:52:31.612429       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 08:52:31.651353       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 08:52:31.701994       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 08:52:31.713165       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	I0110 08:52:33.230313       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 08:52:39 no-preload-095312 kubelet[2184]: I0110 08:52:39.296051    2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4b5b359b-7da6-4ebd-9847-d9770cf6a877-cni-cfg\") pod \"kindnet-tzmwv\" (UID: \"4b5b359b-7da6-4ebd-9847-d9770cf6a877\") " pod="kube-system/kindnet-tzmwv"
	Jan 10 08:52:39 no-preload-095312 kubelet[2184]: I0110 08:52:39.296074    2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krgsw\" (UniqueName: \"kubernetes.io/projected/4b5b359b-7da6-4ebd-9847-d9770cf6a877-kube-api-access-krgsw\") pod \"kindnet-tzmwv\" (UID: \"4b5b359b-7da6-4ebd-9847-d9770cf6a877\") " pod="kube-system/kindnet-tzmwv"
	Jan 10 08:52:39 no-preload-095312 kubelet[2184]: I0110 08:52:39.296099    2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8699c9be-7906-4f12-bb70-06198b5dab20-lib-modules\") pod \"kube-proxy-vrzf6\" (UID: \"8699c9be-7906-4f12-bb70-06198b5dab20\") " pod="kube-system/kube-proxy-vrzf6"
	Jan 10 08:52:39 no-preload-095312 kubelet[2184]: I0110 08:52:39.296127    2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4b5b359b-7da6-4ebd-9847-d9770cf6a877-xtables-lock\") pod \"kindnet-tzmwv\" (UID: \"4b5b359b-7da6-4ebd-9847-d9770cf6a877\") " pod="kube-system/kindnet-tzmwv"
	Jan 10 08:52:39 no-preload-095312 kubelet[2184]: I0110 08:52:39.296146    2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4b5b359b-7da6-4ebd-9847-d9770cf6a877-lib-modules\") pod \"kindnet-tzmwv\" (UID: \"4b5b359b-7da6-4ebd-9847-d9770cf6a877\") " pod="kube-system/kindnet-tzmwv"
	Jan 10 08:52:39 no-preload-095312 kubelet[2184]: I0110 08:52:39.296172    2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8699c9be-7906-4f12-bb70-06198b5dab20-xtables-lock\") pod \"kube-proxy-vrzf6\" (UID: \"8699c9be-7906-4f12-bb70-06198b5dab20\") " pod="kube-system/kube-proxy-vrzf6"
	Jan 10 08:52:39 no-preload-095312 kubelet[2184]: E0110 08:52:39.316112    2184 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-095312" containerName="kube-controller-manager"
	Jan 10 08:52:44 no-preload-095312 kubelet[2184]: I0110 08:52:44.432311    2184 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-vrzf6" podStartSLOduration=5.432293642 podStartE2EDuration="5.432293642s" podCreationTimestamp="2026-01-10 08:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 08:52:40.428910361 +0000 UTC m=+7.152554117" watchObservedRunningTime="2026-01-10 08:52:44.432293642 +0000 UTC m=+11.155937398"
	Jan 10 08:52:46 no-preload-095312 kubelet[2184]: E0110 08:52:46.737931    2184 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-095312" containerName="kube-apiserver"
	Jan 10 08:52:46 no-preload-095312 kubelet[2184]: I0110 08:52:46.748359    2184 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-tzmwv" podStartSLOduration=4.03853496 podStartE2EDuration="7.748338798s" podCreationTimestamp="2026-01-10 08:52:39 +0000 UTC" firstStartedPulling="2026-01-10 08:52:39.524875581 +0000 UTC m=+6.248519332" lastFinishedPulling="2026-01-10 08:52:43.234679421 +0000 UTC m=+9.958323170" observedRunningTime="2026-01-10 08:52:44.432983462 +0000 UTC m=+11.156627214" watchObservedRunningTime="2026-01-10 08:52:46.748338798 +0000 UTC m=+13.471982554"
	Jan 10 08:52:47 no-preload-095312 kubelet[2184]: E0110 08:52:47.924300    2184 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-095312" containerName="kube-scheduler"
	Jan 10 08:52:48 no-preload-095312 kubelet[2184]: E0110 08:52:48.927815    2184 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-095312" containerName="etcd"
	Jan 10 08:52:49 no-preload-095312 kubelet[2184]: E0110 08:52:49.325080    2184 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-095312" containerName="kube-controller-manager"
	Jan 10 08:52:54 no-preload-095312 kubelet[2184]: I0110 08:52:54.246515    2184 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 10 08:52:54 no-preload-095312 kubelet[2184]: I0110 08:52:54.305132    2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c46af89a-96de-457d-8dd1-a952953a3a6b-tmp\") pod \"storage-provisioner\" (UID: \"c46af89a-96de-457d-8dd1-a952953a3a6b\") " pod="kube-system/storage-provisioner"
	Jan 10 08:52:54 no-preload-095312 kubelet[2184]: I0110 08:52:54.305187    2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trbqp\" (UniqueName: \"kubernetes.io/projected/c46af89a-96de-457d-8dd1-a952953a3a6b-kube-api-access-trbqp\") pod \"storage-provisioner\" (UID: \"c46af89a-96de-457d-8dd1-a952953a3a6b\") " pod="kube-system/storage-provisioner"
	Jan 10 08:52:54 no-preload-095312 kubelet[2184]: I0110 08:52:54.305227    2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5aa4b05d-86f1-4ec4-bc9d-e5b649d5e7a2-config-volume\") pod \"coredns-7d764666f9-wpsnn\" (UID: \"5aa4b05d-86f1-4ec4-bc9d-e5b649d5e7a2\") " pod="kube-system/coredns-7d764666f9-wpsnn"
	Jan 10 08:52:54 no-preload-095312 kubelet[2184]: I0110 08:52:54.305251    2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7v2z\" (UniqueName: \"kubernetes.io/projected/5aa4b05d-86f1-4ec4-bc9d-e5b649d5e7a2-kube-api-access-f7v2z\") pod \"coredns-7d764666f9-wpsnn\" (UID: \"5aa4b05d-86f1-4ec4-bc9d-e5b649d5e7a2\") " pod="kube-system/coredns-7d764666f9-wpsnn"
	Jan 10 08:52:55 no-preload-095312 kubelet[2184]: E0110 08:52:55.447509    2184 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wpsnn" containerName="coredns"
	Jan 10 08:52:55 no-preload-095312 kubelet[2184]: I0110 08:52:55.460623    2184 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-wpsnn" podStartSLOduration=16.460600856 podStartE2EDuration="16.460600856s" podCreationTimestamp="2026-01-10 08:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 08:52:55.460197838 +0000 UTC m=+22.183841595" watchObservedRunningTime="2026-01-10 08:52:55.460600856 +0000 UTC m=+22.184244611"
	Jan 10 08:52:55 no-preload-095312 kubelet[2184]: I0110 08:52:55.472226    2184 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.472207342 podStartE2EDuration="16.472207342s" podCreationTimestamp="2026-01-10 08:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 08:52:55.472148484 +0000 UTC m=+22.195792240" watchObservedRunningTime="2026-01-10 08:52:55.472207342 +0000 UTC m=+22.195851097"
	Jan 10 08:52:56 no-preload-095312 kubelet[2184]: E0110 08:52:56.452051    2184 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wpsnn" containerName="coredns"
	Jan 10 08:52:57 no-preload-095312 kubelet[2184]: E0110 08:52:57.457209    2184 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wpsnn" containerName="coredns"
	Jan 10 08:52:57 no-preload-095312 kubelet[2184]: I0110 08:52:57.525557    2184 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7zmq\" (UniqueName: \"kubernetes.io/projected/b48219ff-c748-4c50-bc09-518ec890a3b3-kube-api-access-k7zmq\") pod \"busybox\" (UID: \"b48219ff-c748-4c50-bc09-518ec890a3b3\") " pod="default/busybox"
	Jan 10 08:52:59 no-preload-095312 kubelet[2184]: I0110 08:52:59.471917    2184 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.222681216 podStartE2EDuration="2.47189934s" podCreationTimestamp="2026-01-10 08:52:57 +0000 UTC" firstStartedPulling="2026-01-10 08:52:57.803710022 +0000 UTC m=+24.527353760" lastFinishedPulling="2026-01-10 08:52:59.052928131 +0000 UTC m=+25.776571884" observedRunningTime="2026-01-10 08:52:59.47183541 +0000 UTC m=+26.195479166" watchObservedRunningTime="2026-01-10 08:52:59.47189934 +0000 UTC m=+26.195543096"
	
	
	==> storage-provisioner [1fa681d6416e4d20cf4f998f2ea9cb688ca24e417bc32c00d519dca25cb6f33b] <==
	I0110 08:52:54.679727       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 08:52:54.691299       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 08:52:54.691362       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 08:52:54.694371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:52:54.703049       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 08:52:54.703261       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 08:52:54.703450       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-095312_953ab091-2f2a-4abc-a6ee-c5d2b9a6d50d!
	I0110 08:52:54.711952       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2d7883ff-6c30-48ff-9e3a-f260577e9c48", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-095312_953ab091-2f2a-4abc-a6ee-c5d2b9a6d50d became leader
	W0110 08:52:54.712621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:52:54.719317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 08:52:54.803549       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-095312_953ab091-2f2a-4abc-a6ee-c5d2b9a6d50d!
	W0110 08:52:56.723356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:52:56.728835       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:52:58.732566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:52:58.737017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:00.741104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:00.746330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:02.750329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:02.754332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:04.758086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:04.779012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:06.782232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:06.786891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-095312 -n no-preload-095312
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-095312 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.36s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-072273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-072273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (245.333311ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:53:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-072273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-072273 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-072273 describe deploy/metrics-server -n kube-system: exit status 1 (61.759081ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-072273 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-072273
helpers_test.go:244: (dbg) docker inspect embed-certs-072273:

-- stdout --
	[
	    {
	        "Id": "55ee49e3eee1e82dd39a34aa05ca74ac8290e62f4e2e2be1df6922e52b7bc344",
	        "Created": "2026-01-10T08:52:43.607439204Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300973,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T08:52:43.647191816Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/55ee49e3eee1e82dd39a34aa05ca74ac8290e62f4e2e2be1df6922e52b7bc344/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/55ee49e3eee1e82dd39a34aa05ca74ac8290e62f4e2e2be1df6922e52b7bc344/hostname",
	        "HostsPath": "/var/lib/docker/containers/55ee49e3eee1e82dd39a34aa05ca74ac8290e62f4e2e2be1df6922e52b7bc344/hosts",
	        "LogPath": "/var/lib/docker/containers/55ee49e3eee1e82dd39a34aa05ca74ac8290e62f4e2e2be1df6922e52b7bc344/55ee49e3eee1e82dd39a34aa05ca74ac8290e62f4e2e2be1df6922e52b7bc344-json.log",
	        "Name": "/embed-certs-072273",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-072273:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-072273",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "55ee49e3eee1e82dd39a34aa05ca74ac8290e62f4e2e2be1df6922e52b7bc344",
	                "LowerDir": "/var/lib/docker/overlay2/56524a28931c04c257d4895fd7efe2b53022251486e86a9149ff74604d9ab63e-init/diff:/var/lib/docker/overlay2/97b14eb520192356c991915ce74cd6094e6ef948f398ac688b667ea18491ccde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/56524a28931c04c257d4895fd7efe2b53022251486e86a9149ff74604d9ab63e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/56524a28931c04c257d4895fd7efe2b53022251486e86a9149ff74604d9ab63e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/56524a28931c04c257d4895fd7efe2b53022251486e86a9149ff74604d9ab63e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-072273",
	                "Source": "/var/lib/docker/volumes/embed-certs-072273/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-072273",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-072273",
	                "name.minikube.sigs.k8s.io": "embed-certs-072273",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9bde778f60837ca79932403a55f51a614346135af22bb45be83205e94b3f27b1",
	            "SandboxKey": "/var/run/docker/netns/9bde778f6083",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-072273": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5339a54148e7314a379bb4609318a80f780708af6dca5aa937db0b5ad6eef145",
	                    "EndpointID": "1faf2661ca01eb48335777cdf5bc9da61edb5af011bb95410d2f40cb3dc0f644",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "46:da:67:89:be:93",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-072273",
	                        "55ee49e3eee1"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-072273 -n embed-certs-072273
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-072273 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-072273 logs -n 25: (1.104701815s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-472660 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                    │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │                     │
	│ ssh     │ -p flannel-472660 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                    │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                               │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │                     │
	│ ssh     │ -p flannel-472660 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                         │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo cri-dockerd --version                                                                                                                                                                                                  │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                    │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │                     │
	│ ssh     │ -p flannel-472660 sudo systemctl cat containerd --no-pager                                                                                                                                                                                    │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                             │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo cat /etc/containerd/config.toml                                                                                                                                                                                        │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo containerd config dump                                                                                                                                                                                                 │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                          │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo systemctl cat crio --no-pager                                                                                                                                                                                          │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo crio config                                                                                                                                                                                                            │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ delete  │ -p disable-driver-mounts-847921                                                                                                                                                                                                               │ disable-driver-mounts-847921 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p default-k8s-diff-port-225354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-093083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-095312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ stop    │ -p old-k8s-version-093083 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ stop    │ -p no-preload-095312 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-093083 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p old-k8s-version-093083 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-095312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p no-preload-095312 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-072273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-072273           │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
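	
	Each row of the table above is one minikube invocation split across columns (command, args, profile). For example, the no-preload `start` row recorded at 08:53 reassembles to the following shell command; the binary path and profile name are the ones used by this CI run, shown here only as a reading aid:
	
		out/minikube-linux-amd64 start -p no-preload-095312 --memory=3072 --alsologtostderr \
		  --wait=true --preload=false --driver=docker --container-runtime=crio \
		  --kubernetes-version=v1.35.0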
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:53:24
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:53:24.456235  313874 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:53:24.456347  313874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:53:24.456359  313874 out.go:374] Setting ErrFile to fd 2...
	I0110 08:53:24.456366  313874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:53:24.456603  313874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:53:24.457209  313874 out.go:368] Setting JSON to false
	I0110 08:53:24.458498  313874 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2156,"bootTime":1768033048,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:53:24.458574  313874 start.go:143] virtualization: kvm guest
	I0110 08:53:24.460795  313874 out.go:179] * [no-preload-095312] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 08:53:24.462689  313874 notify.go:221] Checking for updates...
	I0110 08:53:24.462718  313874 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:53:24.463684  313874 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:53:24.465430  313874 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:53:24.466679  313874 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:53:24.471289  313874 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 08:53:24.472653  313874 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:53:24.474834  313874 config.go:182] Loaded profile config "no-preload-095312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:53:24.475338  313874 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:53:24.498798  313874 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:53:24.498943  313874 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:53:24.560213  313874 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2026-01-10 08:53:24.546788462 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:53:24.560320  313874 docker.go:319] overlay module found
	I0110 08:53:24.561875  313874 out.go:179] * Using the docker driver based on existing profile
	I0110 08:53:24.563316  313874 start.go:309] selected driver: docker
	I0110 08:53:24.563337  313874 start.go:928] validating driver "docker" against &{Name:no-preload-095312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-095312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:53:24.563461  313874 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:53:24.564252  313874 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:53:24.625237  313874 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2026-01-10 08:53:24.615902599 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:53:24.625576  313874 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 08:53:24.625613  313874 cni.go:84] Creating CNI manager for ""
	I0110 08:53:24.625684  313874 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:53:24.625746  313874 start.go:353] cluster config:
	{Name:no-preload-095312 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:no-preload-095312 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:53:24.627959  313874 out.go:179] * Starting "no-preload-095312" primary control-plane node in "no-preload-095312" cluster
	I0110 08:53:24.628927  313874 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 08:53:24.629997  313874 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:53:24.631005  313874 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:53:24.631050  313874 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:53:24.631109  313874 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/no-preload-095312/config.json ...
	I0110 08:53:24.631239  313874 cache.go:107] acquiring lock: {Name:mk0ffdc100d5be1fc488fad795f657e173093b66 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:53:24.631241  313874 cache.go:107] acquiring lock: {Name:mk0bd2627d2a0098a4c92842cda8c861c24e38b5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:53:24.631314  313874 cache.go:107] acquiring lock: {Name:mk20f94405e81bb9745877d83be965e58390abbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:53:24.631241  313874 cache.go:107] acquiring lock: {Name:mkd8699b5b4022d6c2e0c8d1db7f0fbeac5b1044 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:53:24.631355  313874 cache.go:107] acquiring lock: {Name:mka520837385568a24621b8913c7e5a70d7e8393 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:53:24.631335  313874 cache.go:107] acquiring lock: {Name:mkccada845bdbe0fb8f9389dbc5b1b26a72873c8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:53:24.631383  313874 cache.go:115] /home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0110 08:53:24.631349  313874 cache.go:107] acquiring lock: {Name:mk19bef5c01d9ea8cdb2d202f57b2d81454ff8a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:53:24.631399  313874 cache.go:115] /home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 exists
	I0110 08:53:24.631401  313874 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 181.903µs
	I0110 08:53:24.631390  313874 cache.go:107] acquiring lock: {Name:mkeddb2b952d747e36ab20b3c5783661efca72ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:53:24.631412  313874 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0110 08:53:24.631409  313874 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0" -> "/home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0" took 189.479µs
	I0110 08:53:24.631413  313874 cache.go:115] /home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 exists
	I0110 08:53:24.631419  313874 cache.go:115] /home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I0110 08:53:24.631421  313874 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0 -> /home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0 succeeded
	I0110 08:53:24.631391  313874 cache.go:115] /home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 exists
	I0110 08:53:24.631425  313874 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0" -> "/home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0" took 206.492µs
	I0110 08:53:24.631474  313874 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0 -> /home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0 succeeded
	I0110 08:53:24.631428  313874 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 82.159µs
	I0110 08:53:24.631487  313874 cache.go:115] /home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I0110 08:53:24.631490  313874 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I0110 08:53:24.631487  313874 cache.go:115] /home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 exists
	I0110 08:53:24.631451  313874 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0" -> "/home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0" took 139.208µs
	I0110 08:53:24.631500  313874 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0 -> /home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0 succeeded
	I0110 08:53:24.631504  313874 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0" -> "/home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0" took 225.766µs
	I0110 08:53:24.631528  313874 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0 -> /home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0 succeeded
	I0110 08:53:24.631513  313874 cache.go:115] /home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I0110 08:53:24.631549  313874 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 208.638µs
	I0110 08:53:24.631560  313874 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I0110 08:53:24.631579  313874 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 277.145µs
	I0110 08:53:24.631607  313874 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22427-3641/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I0110 08:53:24.631624  313874 cache.go:87] Successfully saved all images to host disk.
	I0110 08:53:24.652042  313874 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 08:53:24.652063  313874 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 08:53:24.652081  313874 cache.go:243] Successfully downloaded all kic artifacts
	I0110 08:53:24.652115  313874 start.go:360] acquireMachinesLock for no-preload-095312: {Name:mka6d7d87120b87744f31a2bd7a652cc71ae5a81 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:53:24.652180  313874 start.go:364] duration metric: took 46.914µs to acquireMachinesLock for "no-preload-095312"
	I0110 08:53:24.652202  313874 start.go:96] Skipping create...Using existing machine configuration
	I0110 08:53:24.652211  313874 fix.go:54] fixHost starting: 
	I0110 08:53:24.652490  313874 cli_runner.go:164] Run: docker container inspect no-preload-095312 --format={{.State.Status}}
	I0110 08:53:24.675108  313874 fix.go:112] recreateIfNeeded on no-preload-095312: state=Stopped err=<nil>
	W0110 08:53:24.675138  313874 fix.go:138] unexpected machine state, will restart: <nil>
	I0110 08:53:20.406255  307694 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 08:53:20.406310  307694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:20.406376  307694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-225354 minikube.k8s.io/updated_at=2026_01_10T08_53_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee minikube.k8s.io/name=default-k8s-diff-port-225354 minikube.k8s.io/primary=true
	I0110 08:53:20.415642  307694 ops.go:34] apiserver oom_adj: -16
	I0110 08:53:20.508471  307694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:21.008848  307694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:21.509467  307694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:22.008950  307694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:22.509493  307694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:23.008671  307694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:23.508929  307694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:24.008880  307694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:24.508927  307694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:25.008601  307694 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:53:25.080945  307694 kubeadm.go:1114] duration metric: took 4.674676988s to wait for elevateKubeSystemPrivileges
	I0110 08:53:25.081023  307694 kubeadm.go:403] duration metric: took 12.740377983s to StartCluster
	I0110 08:53:25.081057  307694 settings.go:142] acquiring lock: {Name:mkbb32fc6441ceb31ce2923ea8999f8375298f08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:53:25.081129  307694 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:53:25.086488  307694 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/kubeconfig: {Name:mk0d29b3b0ee1fd71729aff31f901be50a2d0664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:53:25.086928  307694 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 08:53:25.087506  307694 config.go:182] Loaded profile config "default-k8s-diff-port-225354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:53:25.088537  307694 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 08:53:25.088577  307694 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 08:53:25.088920  307694 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-225354"
	I0110 08:53:25.088945  307694 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-225354"
	I0110 08:53:25.088978  307694 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-225354"
	I0110 08:53:25.088996  307694 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-225354"
	I0110 08:53:25.089395  307694 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225354 --format={{.State.Status}}
	I0110 08:53:25.089592  307694 host.go:66] Checking if "default-k8s-diff-port-225354" exists ...
	I0110 08:53:25.090085  307694 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225354 --format={{.State.Status}}
	I0110 08:53:25.091841  307694 out.go:179] * Verifying Kubernetes components...
	I0110 08:53:25.093064  307694 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:53:25.117608  307694 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-225354"
	I0110 08:53:25.117661  307694 host.go:66] Checking if "default-k8s-diff-port-225354" exists ...
	I0110 08:53:25.117711  307694 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 08:53:25.118186  307694 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225354 --format={{.State.Status}}
	I0110 08:53:25.121957  307694 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 08:53:25.121980  307694 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 08:53:25.122036  307694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:53:25.148938  307694 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 08:53:25.148963  307694 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 08:53:25.149198  307694 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:53:25.158386  307694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:53:25.180312  307694 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:53:25.204043  307694 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0110 08:53:25.255900  307694 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:53:25.287568  307694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 08:53:25.298537  307694 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 08:53:25.391488  307694 start.go:987] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0110 08:53:25.392909  307694 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-225354" to be "Ready" ...
	I0110 08:53:25.645912  307694 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
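	
	The CoreDNS rewrite logged above (the ssh_runner command at 08:53:25.204043) splices a hosts block into the cluster Corefile ahead of the forward directive, so that host.minikube.internal resolves to the host gateway, and adds a log directive ahead of errors. A sketch of the resulting Corefile fragment, reconstructed from the sed expressions in the log with the surrounding directives abbreviated:
	
		.:53 {
		    log
		    errors
		    ...
		    hosts {
		       192.168.85.1 host.minikube.internal
		       fallthrough
		    }
		    forward . /etc/resolv.conf
		    ...
		}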
	I0110 08:53:22.908722  313146 out.go:252] * Restarting existing docker container for "old-k8s-version-093083" ...
	I0110 08:53:22.908806  313146 cli_runner.go:164] Run: docker start old-k8s-version-093083
	I0110 08:53:23.170562  313146 cli_runner.go:164] Run: docker container inspect old-k8s-version-093083 --format={{.State.Status}}
	I0110 08:53:23.191020  313146 kic.go:430] container "old-k8s-version-093083" state is running.
	I0110 08:53:23.191390  313146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-093083
	I0110 08:53:23.210318  313146 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/old-k8s-version-093083/config.json ...
	I0110 08:53:23.210556  313146 machine.go:94] provisionDockerMachine start ...
	I0110 08:53:23.210637  313146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093083
	I0110 08:53:23.229651  313146 main.go:144] libmachine: Using SSH client type: native
	I0110 08:53:23.229989  313146 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0110 08:53:23.230004  313146 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 08:53:23.230639  313146 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54290->127.0.0.1:33108: read: connection reset by peer
	I0110 08:53:26.366079  313146 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-093083
	
	I0110 08:53:26.366115  313146 ubuntu.go:182] provisioning hostname "old-k8s-version-093083"
	I0110 08:53:26.366184  313146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093083
	I0110 08:53:26.384604  313146 main.go:144] libmachine: Using SSH client type: native
	I0110 08:53:26.384902  313146 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0110 08:53:26.384922  313146 main.go:144] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-093083 && echo "old-k8s-version-093083" | sudo tee /etc/hostname
	I0110 08:53:26.525924  313146 main.go:144] libmachine: SSH cmd err, output: <nil>: old-k8s-version-093083
	
	I0110 08:53:26.526005  313146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093083
	I0110 08:53:26.546902  313146 main.go:144] libmachine: Using SSH client type: native
	I0110 08:53:26.547211  313146 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0110 08:53:26.547234  313146 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-093083' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-093083/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-093083' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 08:53:26.682805  313146 main.go:144] libmachine: SSH cmd err, output: <nil>: 
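	
	The /etc/hosts edit above is idempotent: it touches the file only when the hostname is not already mapped, rewriting an existing 127.0.1.1 line if one is present and appending a fresh entry otherwise. A minimal standalone sketch of the same pattern, with the hostname pulled out into a variable purely for illustration:
	
		HOST=old-k8s-version-093083
		if ! grep -xq ".*\s${HOST}" /etc/hosts; then          # hostname already mapped? do nothing
		  if grep -xq '127.0.1.1\s.*' /etc/hosts; then        # an existing 127.0.1.1 line: rewrite it
		    sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${HOST}/g" /etc/hosts
		  else                                                # no 127.0.1.1 line yet: append one
		    echo "127.0.1.1 ${HOST}" | sudo tee -a /etc/hosts
		  fi
		fi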
	I0110 08:53:26.682843  313146 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-3641/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-3641/.minikube}
	I0110 08:53:26.682870  313146 ubuntu.go:190] setting up certificates
	I0110 08:53:26.682900  313146 provision.go:84] configureAuth start
	I0110 08:53:26.682966  313146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-093083
	I0110 08:53:26.702458  313146 provision.go:143] copyHostCerts
	I0110 08:53:26.702519  313146 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem, removing ...
	I0110 08:53:26.702530  313146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem
	I0110 08:53:26.702589  313146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem (1078 bytes)
	I0110 08:53:26.702691  313146 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem, removing ...
	I0110 08:53:26.702700  313146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem
	I0110 08:53:26.702729  313146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem (1123 bytes)
	I0110 08:53:26.702830  313146 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem, removing ...
	I0110 08:53:26.702839  313146 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem
	I0110 08:53:26.702865  313146 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem (1675 bytes)
	I0110 08:53:26.702929  313146 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-093083 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-093083]
	I0110 08:53:26.788505  313146 provision.go:177] copyRemoteCerts
	I0110 08:53:26.788571  313146 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 08:53:26.788615  313146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093083
	I0110 08:53:26.807271  313146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/old-k8s-version-093083/id_rsa Username:docker}
	I0110 08:53:26.902217  313146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0110 08:53:26.920591  313146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0110 08:53:26.939240  313146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 08:53:26.956864  313146 provision.go:87] duration metric: took 273.942911ms to configureAuth
	I0110 08:53:26.956895  313146 ubuntu.go:206] setting minikube options for container-runtime
	I0110 08:53:26.957084  313146 config.go:182] Loaded profile config "old-k8s-version-093083": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 08:53:26.957197  313146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093083
	I0110 08:53:26.975350  313146 main.go:144] libmachine: Using SSH client type: native
	I0110 08:53:26.975544  313146 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33108 <nil> <nil>}
	I0110 08:53:26.975558  313146 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 08:53:27.304193  313146 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 08:53:27.304223  313146 machine.go:97] duration metric: took 4.093646918s to provisionDockerMachine
	I0110 08:53:27.304243  313146 start.go:293] postStartSetup for "old-k8s-version-093083" (driver="docker")
	I0110 08:53:27.304300  313146 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 08:53:27.304388  313146 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 08:53:27.304477  313146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093083
	I0110 08:53:27.327224  313146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/old-k8s-version-093083/id_rsa Username:docker}
	I0110 08:53:27.426846  313146 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 08:53:27.431082  313146 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 08:53:27.431113  313146 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 08:53:27.431126  313146 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-3641/.minikube/addons for local assets ...
	I0110 08:53:27.431171  313146 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-3641/.minikube/files for local assets ...
	I0110 08:53:27.431249  313146 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem -> 71832.pem in /etc/ssl/certs
	I0110 08:53:27.431375  313146 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 08:53:27.440134  313146 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem --> /etc/ssl/certs/71832.pem (1708 bytes)
	I0110 08:53:27.462624  313146 start.go:296] duration metric: took 158.366219ms for postStartSetup
	I0110 08:53:27.462701  313146 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:53:27.462803  313146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093083
	I0110 08:53:27.483223  313146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/old-k8s-version-093083/id_rsa Username:docker}
	I0110 08:53:27.576795  313146 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 08:53:27.581359  313146 fix.go:56] duration metric: took 4.692117117s for fixHost
	I0110 08:53:27.581392  313146 start.go:83] releasing machines lock for "old-k8s-version-093083", held for 4.69217462s
	I0110 08:53:27.581457  313146 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-093083
	I0110 08:53:27.599261  313146 ssh_runner.go:195] Run: cat /version.json
	I0110 08:53:27.599308  313146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093083
	I0110 08:53:27.599350  313146 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 08:53:27.599456  313146 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093083
	I0110 08:53:27.620196  313146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/old-k8s-version-093083/id_rsa Username:docker}
	I0110 08:53:27.620548  313146 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/old-k8s-version-093083/id_rsa Username:docker}
	
	
	==> CRI-O <==
	Jan 10 08:53:17 embed-certs-072273 crio[777]: time="2026-01-10T08:53:17.684279921Z" level=info msg="Starting container: 36a96bc09ab1edc4d4c97629b8901ebdbe48e73e887bc8eebf7cd53cd907dc64" id=7122cb6a-606b-4c7a-bd49-a1c7306ad1a1 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:53:17 embed-certs-072273 crio[777]: time="2026-01-10T08:53:17.686420533Z" level=info msg="Started container" PID=1876 containerID=36a96bc09ab1edc4d4c97629b8901ebdbe48e73e887bc8eebf7cd53cd907dc64 description=kube-system/coredns-7d764666f9-ss4nt/coredns id=7122cb6a-606b-4c7a-bd49-a1c7306ad1a1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a12a2e32cd0ad3ada67da48d0c61540a1dd6c78dc33f6be28b43025716c32797
	Jan 10 08:53:20 embed-certs-072273 crio[777]: time="2026-01-10T08:53:20.282239838Z" level=info msg="Running pod sandbox: default/busybox/POD" id=32ac4726-3edb-45b7-9f58-66b2e0a5ee3e name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 08:53:20 embed-certs-072273 crio[777]: time="2026-01-10T08:53:20.282334964Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:53:20 embed-certs-072273 crio[777]: time="2026-01-10T08:53:20.28785891Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:31e606a79ae09046cbb4e17dc98108b259f03a433c1eb24e1107e21472c7880a UID:87bb5117-4f07-448e-bd80-5c13abfe1ede NetNS:/var/run/netns/44c80b3f-95f7-49ed-a5c3-904eaca07d86 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0010144a0}] Aliases:map[]}"
	Jan 10 08:53:20 embed-certs-072273 crio[777]: time="2026-01-10T08:53:20.287898827Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 10 08:53:20 embed-certs-072273 crio[777]: time="2026-01-10T08:53:20.306270032Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:31e606a79ae09046cbb4e17dc98108b259f03a433c1eb24e1107e21472c7880a UID:87bb5117-4f07-448e-bd80-5c13abfe1ede NetNS:/var/run/netns/44c80b3f-95f7-49ed-a5c3-904eaca07d86 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0010144a0}] Aliases:map[]}"
	Jan 10 08:53:20 embed-certs-072273 crio[777]: time="2026-01-10T08:53:20.306414816Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 10 08:53:20 embed-certs-072273 crio[777]: time="2026-01-10T08:53:20.307395592Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 10 08:53:20 embed-certs-072273 crio[777]: time="2026-01-10T08:53:20.308151891Z" level=info msg="Ran pod sandbox 31e606a79ae09046cbb4e17dc98108b259f03a433c1eb24e1107e21472c7880a with infra container: default/busybox/POD" id=32ac4726-3edb-45b7-9f58-66b2e0a5ee3e name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 08:53:20 embed-certs-072273 crio[777]: time="2026-01-10T08:53:20.309500587Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c49bdc32-c93b-47ba-a8f7-7fc03f6bd5bf name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:53:20 embed-certs-072273 crio[777]: time="2026-01-10T08:53:20.309639517Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c49bdc32-c93b-47ba-a8f7-7fc03f6bd5bf name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:53:20 embed-certs-072273 crio[777]: time="2026-01-10T08:53:20.309703017Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c49bdc32-c93b-47ba-a8f7-7fc03f6bd5bf name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:53:20 embed-certs-072273 crio[777]: time="2026-01-10T08:53:20.310463301Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9c09766f-aa0d-4efa-9622-2e880796fec5 name=/runtime.v1.ImageService/PullImage
	Jan 10 08:53:20 embed-certs-072273 crio[777]: time="2026-01-10T08:53:20.31081236Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 10 08:53:21 embed-certs-072273 crio[777]: time="2026-01-10T08:53:21.456162115Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=9c09766f-aa0d-4efa-9622-2e880796fec5 name=/runtime.v1.ImageService/PullImage
	Jan 10 08:53:21 embed-certs-072273 crio[777]: time="2026-01-10T08:53:21.456747605Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1a3e0306-66a1-4bad-822f-d5d7683e0457 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:53:21 embed-certs-072273 crio[777]: time="2026-01-10T08:53:21.45825479Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2763537b-6bcb-487e-9b51-8f3f4c21254e name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:53:21 embed-certs-072273 crio[777]: time="2026-01-10T08:53:21.461535096Z" level=info msg="Creating container: default/busybox/busybox" id=fcb975bd-49f3-43d2-bf83-26be9db0ab71 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:53:21 embed-certs-072273 crio[777]: time="2026-01-10T08:53:21.46165831Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:53:21 embed-certs-072273 crio[777]: time="2026-01-10T08:53:21.465111082Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:53:21 embed-certs-072273 crio[777]: time="2026-01-10T08:53:21.465530932Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:53:21 embed-certs-072273 crio[777]: time="2026-01-10T08:53:21.492675246Z" level=info msg="Created container 2b522ffa48521a02a057abf957453d5e328efdf0c0bd953f9fe41214b1ed0109: default/busybox/busybox" id=fcb975bd-49f3-43d2-bf83-26be9db0ab71 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:53:21 embed-certs-072273 crio[777]: time="2026-01-10T08:53:21.493326686Z" level=info msg="Starting container: 2b522ffa48521a02a057abf957453d5e328efdf0c0bd953f9fe41214b1ed0109" id=c3dd511e-68b4-4dbf-a1cf-8160cccb4cc9 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:53:21 embed-certs-072273 crio[777]: time="2026-01-10T08:53:21.495090018Z" level=info msg="Started container" PID=1957 containerID=2b522ffa48521a02a057abf957453d5e328efdf0c0bd953f9fe41214b1ed0109 description=default/busybox/busybox id=c3dd511e-68b4-4dbf-a1cf-8160cccb4cc9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=31e606a79ae09046cbb4e17dc98108b259f03a433c1eb24e1107e21472c7880a
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	2b522ffa48521       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   31e606a79ae09       busybox                                      default
	36a96bc09ab1e       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      10 seconds ago      Running             coredns                   0                   a12a2e32cd0ad       coredns-7d764666f9-ss4nt                     kube-system
	24a830ab71761       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 seconds ago      Running             storage-provisioner       0                   d04cc7e5882b1       storage-provisioner                          kube-system
	b12cd1bae7802       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    22 seconds ago      Running             kindnet-cni               0                   c2b2d021a366d       kindnet-svs4f                                kube-system
	cd6aab7d7eedc       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                      24 seconds ago      Running             kube-proxy                0                   94a9b01767d2d       kube-proxy-sndfh                             kube-system
	b014b3ab937cc       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      35 seconds ago      Running             etcd                      0                   edfbe69afa949       etcd-embed-certs-072273                      kube-system
	eedaf38fdc1ae       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                      35 seconds ago      Running             kube-scheduler            0                   e9a9e48314b71       kube-scheduler-embed-certs-072273            kube-system
	173cc2ab2cc5f       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                      35 seconds ago      Running             kube-controller-manager   0                   5b1fed45f163b       kube-controller-manager-embed-certs-072273   kube-system
	cc5b2397bfe92       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                      35 seconds ago      Running             kube-apiserver            0                   4ba46f7025696       kube-apiserver-embed-certs-072273            kube-system
	
	
	==> coredns [36a96bc09ab1edc4d4c97629b8901ebdbe48e73e887bc8eebf7cd53cd907dc64] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:36938 - 17842 "HINFO IN 8203586233572324897.2461486078968020975. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.03320299s
	
	
	==> describe nodes <==
	Name:               embed-certs-072273
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-072273
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=embed-certs-072273
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T08_52_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 08:52:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-072273
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 08:53:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 08:53:17 +0000   Sat, 10 Jan 2026 08:52:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 08:53:17 +0000   Sat, 10 Jan 2026 08:52:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 08:53:17 +0000   Sat, 10 Jan 2026 08:52:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 08:53:17 +0000   Sat, 10 Jan 2026 08:53:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-072273
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                296a745e-68fc-4733-bca6-ba83ff3ab707
	  Boot ID:                    7c12f492-59b6-440b-986c-741ece916a23
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-7d764666f9-ss4nt                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-embed-certs-072273                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-svs4f                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-embed-certs-072273             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-embed-certs-072273    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-sndfh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-embed-certs-072273             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node embed-certs-072273 event: Registered Node embed-certs-072273 in Controller
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 4a 03 46 88 66 4e 08 06
	[  +6.406566] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.001429] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[ +24.002020] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7a 6f 49 7e 9d d1 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 3a ac 18 98 c9 08 06
	[Jan10 08:52] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4a 05 5c 86 cf df 08 06
	[  +0.000337] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.000602] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[  +0.739059] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	[  +1.988700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[ +20.040962] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 ac f8 cf fb 84 08 06
	[  +0.000375] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[  +0.000915] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	
	
	==> etcd [b014b3ab937cc697ee6d483e96d4f37ff8bff6bc9a1dd06131f871470ef03d5b] <==
	{"level":"info","ts":"2026-01-10T08:53:03.631356Z","caller":"traceutil/trace.go:172","msg":"trace[1881813521] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/storage-provisioner; range_end:; response_count:0; response_revision:344; }","duration":"151.548988ms","start":"2026-01-10T08:53:03.479790Z","end":"2026-01-10T08:53:03.631339Z","steps":["trace[1881813521] 'agreement among raft nodes before linearized reading'  (duration: 122.758657ms)","trace[1881813521] 'range keys from in-memory index tree'  (duration: 28.625587ms)"],"step_count":2}
	{"level":"info","ts":"2026-01-10T08:53:03.645211Z","caller":"traceutil/trace.go:172","msg":"trace[1329234748] transaction","detail":"{read_only:false; response_revision:349; number_of_response:1; }","duration":"116.418925ms","start":"2026-01-10T08:53:03.528780Z","end":"2026-01-10T08:53:03.645199Z","steps":["trace[1329234748] 'process raft request'  (duration: 116.385915ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-10T08:53:03.645252Z","caller":"traceutil/trace.go:172","msg":"trace[563429048] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"117.245079ms","start":"2026-01-10T08:53:03.527973Z","end":"2026-01-10T08:53:03.645218Z","steps":["trace[563429048] 'process raft request'  (duration: 117.143856ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-10T08:53:03.645190Z","caller":"traceutil/trace.go:172","msg":"trace[1249655828] transaction","detail":"{read_only:false; response_revision:346; number_of_response:1; }","duration":"118.832806ms","start":"2026-01-10T08:53:03.526331Z","end":"2026-01-10T08:53:03.645164Z","steps":["trace[1249655828] 'process raft request'  (duration: 118.668168ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-10T08:53:03.645207Z","caller":"traceutil/trace.go:172","msg":"trace[684534380] transaction","detail":"{read_only:false; response_revision:347; number_of_response:1; }","duration":"118.27866ms","start":"2026-01-10T08:53:03.526911Z","end":"2026-01-10T08:53:03.645190Z","steps":["trace[684534380] 'process raft request'  (duration: 118.171616ms)"],"step_count":1}
	{"level":"warn","ts":"2026-01-10T08:53:03.927909Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.947479ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4390"}
	{"level":"info","ts":"2026-01-10T08:53:03.927985Z","caller":"traceutil/trace.go:172","msg":"trace[2084889672] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:355; }","duration":"127.041599ms","start":"2026-01-10T08:53:03.800927Z","end":"2026-01-10T08:53:03.927969Z","steps":["trace[2084889672] 'agreement among raft nodes before linearized reading'  (duration: 97.304621ms)","trace[2084889672] 'range keys from in-memory index tree'  (duration: 29.596104ms)"],"step_count":2}
	{"level":"info","ts":"2026-01-10T08:53:03.928036Z","caller":"traceutil/trace.go:172","msg":"trace[1037585104] transaction","detail":"{read_only:false; response_revision:357; number_of_response:1; }","duration":"127.578707ms","start":"2026-01-10T08:53:03.800444Z","end":"2026-01-10T08:53:03.928023Z","steps":["trace[1037585104] 'process raft request'  (duration: 127.532258ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-10T08:53:03.928140Z","caller":"traceutil/trace.go:172","msg":"trace[721870436] transaction","detail":"{read_only:false; response_revision:356; number_of_response:1; }","duration":"129.055295ms","start":"2026-01-10T08:53:03.799057Z","end":"2026-01-10T08:53:03.928112Z","steps":["trace[721870436] 'process raft request'  (duration: 99.235684ms)","trace[721870436] 'compare'  (duration: 29.565003ms)"],"step_count":2}
	{"level":"info","ts":"2026-01-10T08:53:04.051293Z","caller":"traceutil/trace.go:172","msg":"trace[1069103238] linearizableReadLoop","detail":"{readStateIndex:372; appliedIndex:372; }","duration":"120.864393ms","start":"2026-01-10T08:53:03.930404Z","end":"2026-01-10T08:53:04.051268Z","steps":["trace[1069103238] 'read index received'  (duration: 120.853988ms)","trace[1069103238] 'applied index is now lower than readState.Index'  (duration: 8.603µs)"],"step_count":2}
	{"level":"warn","ts":"2026-01-10T08:53:04.064115Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.684859ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/rolebindings/kube-system/system:persistent-volume-provisioner\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2026-01-10T08:53:04.064198Z","caller":"traceutil/trace.go:172","msg":"trace[224998403] range","detail":"{range_begin:/registry/rolebindings/kube-system/system:persistent-volume-provisioner; range_end:; response_count:0; response_revision:357; }","duration":"133.786113ms","start":"2026-01-10T08:53:03.930399Z","end":"2026-01-10T08:53:04.064185Z","steps":["trace[224998403] 'agreement among raft nodes before linearized reading'  (duration: 120.966256ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-10T08:53:04.064199Z","caller":"traceutil/trace.go:172","msg":"trace[222705981] transaction","detail":"{read_only:false; response_revision:358; number_of_response:1; }","duration":"140.201919ms","start":"2026-01-10T08:53:03.923982Z","end":"2026-01-10T08:53:04.064184Z","steps":["trace[222705981] 'process raft request'  (duration: 127.334313ms)","trace[222705981] 'compare'  (duration: 12.748173ms)"],"step_count":2}
	{"level":"warn","ts":"2026-01-10T08:53:04.132578Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.664614ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-072273\" limit:1 ","response":"range_response_count:1 size:5590"}
	{"level":"info","ts":"2026-01-10T08:53:04.132650Z","caller":"traceutil/trace.go:172","msg":"trace[561643533] range","detail":"{range_begin:/registry/minions/embed-certs-072273; range_end:; response_count:1; response_revision:358; }","duration":"183.754147ms","start":"2026-01-10T08:53:03.948882Z","end":"2026-01-10T08:53:04.132636Z","steps":["trace[561643533] 'agreement among raft nodes before linearized reading'  (duration: 183.478204ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-10T08:53:04.132590Z","caller":"traceutil/trace.go:172","msg":"trace[1997083605] transaction","detail":"{read_only:false; number_of_response:1; response_revision:358; }","duration":"203.069746ms","start":"2026-01-10T08:53:03.929506Z","end":"2026-01-10T08:53:04.132576Z","steps":["trace[1997083605] 'process raft request'  (duration: 202.846897ms)"],"step_count":1}
	{"level":"warn","ts":"2026-01-10T08:53:04.132689Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.779348ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4390"}
	{"level":"info","ts":"2026-01-10T08:53:04.132722Z","caller":"traceutil/trace.go:172","msg":"trace[1833152548] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:358; }","duration":"183.823007ms","start":"2026-01-10T08:53:03.948892Z","end":"2026-01-10T08:53:04.132715Z","steps":["trace[1833152548] 'agreement among raft nodes before linearized reading'  (duration: 183.453194ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-10T08:53:04.132581Z","caller":"traceutil/trace.go:172","msg":"trace[1070302676] transaction","detail":"{read_only:false; response_revision:359; number_of_response:1; }","duration":"197.522176ms","start":"2026-01-10T08:53:03.935040Z","end":"2026-01-10T08:53:04.132562Z","steps":["trace[1070302676] 'process raft request'  (duration: 197.438109ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-10T08:53:04.340641Z","caller":"traceutil/trace.go:172","msg":"trace[347093768] transaction","detail":"{read_only:false; response_revision:362; number_of_response:1; }","duration":"179.099008ms","start":"2026-01-10T08:53:04.161520Z","end":"2026-01-10T08:53:04.340619Z","steps":["trace[347093768] 'process raft request'  (duration: 126.729539ms)","trace[347093768] 'compare'  (duration: 52.249078ms)"],"step_count":2}
	{"level":"info","ts":"2026-01-10T08:53:04.341388Z","caller":"traceutil/trace.go:172","msg":"trace[900946109] transaction","detail":"{read_only:false; response_revision:365; number_of_response:1; }","duration":"168.317508ms","start":"2026-01-10T08:53:04.173058Z","end":"2026-01-10T08:53:04.341375Z","steps":["trace[900946109] 'process raft request'  (duration: 168.262783ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-10T08:53:04.341442Z","caller":"traceutil/trace.go:172","msg":"trace[1042107846] transaction","detail":"{read_only:false; response_revision:363; number_of_response:1; }","duration":"179.09146ms","start":"2026-01-10T08:53:04.162338Z","end":"2026-01-10T08:53:04.341429Z","steps":["trace[1042107846] 'process raft request'  (duration: 178.877557ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-10T08:53:04.341620Z","caller":"traceutil/trace.go:172","msg":"trace[1046125998] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"170.170144ms","start":"2026-01-10T08:53:04.171442Z","end":"2026-01-10T08:53:04.341612Z","steps":["trace[1046125998] 'process raft request'  (duration: 169.844562ms)"],"step_count":1}
	{"level":"info","ts":"2026-01-10T08:53:04.500089Z","caller":"traceutil/trace.go:172","msg":"trace[259626424] transaction","detail":"{read_only:false; response_revision:368; number_of_response:1; }","duration":"112.004371ms","start":"2026-01-10T08:53:04.388062Z","end":"2026-01-10T08:53:04.500066Z","steps":["trace[259626424] 'process raft request'  (duration: 62.260668ms)","trace[259626424] 'compare'  (duration: 49.587796ms)"],"step_count":2}
	{"level":"info","ts":"2026-01-10T08:53:04.615441Z","caller":"traceutil/trace.go:172","msg":"trace[109832147] transaction","detail":"{read_only:false; response_revision:370; number_of_response:1; }","duration":"109.162991ms","start":"2026-01-10T08:53:04.506245Z","end":"2026-01-10T08:53:04.615408Z","steps":["trace[109832147] 'process raft request'  (duration: 108.545878ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:53:28 up 36 min,  0 user,  load average: 6.56, 4.20, 2.60
	Linux embed-certs-072273 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b12cd1bae78025eab40c7c30b6e5f8f378bcd144f0795c3d5a296f2cdc93dc80] <==
	I0110 08:53:06.670188       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 08:53:06.670529       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0110 08:53:06.670761       1 main.go:148] setting mtu 1500 for CNI 
	I0110 08:53:06.670839       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 08:53:06.670870       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T08:53:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 08:53:06.884936       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 08:53:06.884968       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 08:53:06.884980       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 08:53:06.885396       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 08:53:07.185512       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 08:53:07.185537       1 metrics.go:72] Registering metrics
	I0110 08:53:07.185597       1 controller.go:711] "Syncing nftables rules"
	I0110 08:53:16.884701       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 08:53:16.884819       1 main.go:301] handling current node
	I0110 08:53:26.884056       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 08:53:26.884116       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cc5b2397bfe9257127f258ef47281b11bd6dae654a67ccf861fec1ca3ec21b82] <==
	I0110 08:52:55.132968       1 policy_source.go:248] refreshing policies
	E0110 08:52:55.155036       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I0110 08:52:55.203507       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 08:52:55.206235       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 08:52:55.206417       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:52:55.213424       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:52:55.308395       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 08:52:56.005853       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0110 08:52:56.009729       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0110 08:52:56.009761       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 08:52:56.520720       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 08:52:56.565944       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 08:52:56.711406       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 08:52:56.718636       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I0110 08:52:56.719884       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 08:52:56.724702       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 08:52:57.030859       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 08:52:57.891096       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 08:52:57.901611       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 08:52:57.908484       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0110 08:53:02.784405       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:53:02.788784       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:53:02.883414       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 08:53:03.035596       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E0110 08:53:27.069891       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:48228: use of closed network connection
	
	
	==> kube-controller-manager [173cc2ab2cc5f3030026cf9bd00435f0b6b3683c3627264bc904121cd27a8bfb] <==
	I0110 08:53:01.838594       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:01.838667       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:01.838912       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:01.838937       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:01.839231       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 08:53:01.839362       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:01.839462       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:01.839488       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:01.839508       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:01.839567       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:01.839801       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:01.840546       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:01.839361       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-072273"
	I0110 08:53:01.840779       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0110 08:53:01.842090       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:01.845772       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:53:01.847944       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:01.852155       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:01.852274       1 range_allocator.go:433] "Set node PodCIDR" node="embed-certs-072273" podCIDRs=["10.244.0.0/24"]
	I0110 08:53:01.938465       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:01.938484       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 08:53:01.938492       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 08:53:01.945863       1 shared_informer.go:377] "Caches are synced"
	E0110 08:53:03.791728       1 replica_set.go:592] "Unhandled Error" err="sync \"kube-system/coredns-7d764666f9\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-7d764666f9\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0110 08:53:21.843588       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [cd6aab7d7eedcd721b18cd118b61b50ed2537807c705c6352548638a6af805d7] <==
	I0110 08:53:04.117310       1 server_linux.go:53] "Using iptables proxy"
	I0110 08:53:04.200590       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:53:04.400810       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:04.400851       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0110 08:53:04.400964       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 08:53:04.421266       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 08:53:04.421332       1 server_linux.go:136] "Using iptables Proxier"
	I0110 08:53:04.426637       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 08:53:04.427015       1 server.go:529] "Version info" version="v1.35.0"
	I0110 08:53:04.427040       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:53:04.428362       1 config.go:200] "Starting service config controller"
	I0110 08:53:04.428397       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 08:53:04.428432       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 08:53:04.428454       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 08:53:04.428521       1 config.go:106] "Starting endpoint slice config controller"
	I0110 08:53:04.428555       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 08:53:04.428568       1 config.go:309] "Starting node config controller"
	I0110 08:53:04.428581       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 08:53:04.428589       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 08:53:04.529509       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 08:53:04.529568       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 08:53:04.529579       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [eedaf38fdc1ae3589cb54a9b599c281b7091a2c4b05bdd63f0561393660fa967] <==
	E0110 08:52:55.065286       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 08:52:55.065916       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 08:52:55.065926       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 08:52:55.065956       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 08:52:55.066099       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 08:52:55.066157       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E0110 08:52:55.066201       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 08:52:55.066252       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 08:52:55.066299       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 08:52:55.066325       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 08:52:55.066347       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 08:52:55.066428       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 08:52:55.066778       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 08:52:55.066787       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 08:52:55.067211       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 08:52:55.878485       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 08:52:55.929319       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 08:52:55.947838       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 08:52:56.002066       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 08:52:56.036498       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 08:52:56.121909       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 08:52:56.187329       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 08:52:56.250718       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 08:52:56.541463       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I0110 08:52:58.559818       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 08:53:03 embed-certs-072273 kubelet[1289]: I0110 08:53:03.175341    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ccn5\" (UniqueName: \"kubernetes.io/projected/09208699-0a25-4c01-ab19-c7a9ff2d82fb-kube-api-access-5ccn5\") pod \"kube-proxy-sndfh\" (UID: \"09208699-0a25-4c01-ab19-c7a9ff2d82fb\") " pod="kube-system/kube-proxy-sndfh"
	Jan 10 08:53:03 embed-certs-072273 kubelet[1289]: I0110 08:53:03.175365    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/186060f5-bf0f-47b7-bcc0-6fe86a261bdc-xtables-lock\") pod \"kindnet-svs4f\" (UID: \"186060f5-bf0f-47b7-bcc0-6fe86a261bdc\") " pod="kube-system/kindnet-svs4f"
	Jan 10 08:53:03 embed-certs-072273 kubelet[1289]: I0110 08:53:03.175389    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/186060f5-bf0f-47b7-bcc0-6fe86a261bdc-cni-cfg\") pod \"kindnet-svs4f\" (UID: \"186060f5-bf0f-47b7-bcc0-6fe86a261bdc\") " pod="kube-system/kindnet-svs4f"
	Jan 10 08:53:03 embed-certs-072273 kubelet[1289]: I0110 08:53:03.175409    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/186060f5-bf0f-47b7-bcc0-6fe86a261bdc-lib-modules\") pod \"kindnet-svs4f\" (UID: \"186060f5-bf0f-47b7-bcc0-6fe86a261bdc\") " pod="kube-system/kindnet-svs4f"
	Jan 10 08:53:03 embed-certs-072273 kubelet[1289]: I0110 08:53:03.175431    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlkmr\" (UniqueName: \"kubernetes.io/projected/186060f5-bf0f-47b7-bcc0-6fe86a261bdc-kube-api-access-nlkmr\") pod \"kindnet-svs4f\" (UID: \"186060f5-bf0f-47b7-bcc0-6fe86a261bdc\") " pod="kube-system/kindnet-svs4f"
	Jan 10 08:53:04 embed-certs-072273 kubelet[1289]: I0110 08:53:04.884501    1289 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-sndfh" podStartSLOduration=1.884488298 podStartE2EDuration="1.884488298s" podCreationTimestamp="2026-01-10 08:53:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 08:53:04.883980743 +0000 UTC m=+7.219471605" watchObservedRunningTime="2026-01-10 08:53:04.884488298 +0000 UTC m=+7.219979158"
	Jan 10 08:53:05 embed-certs-072273 kubelet[1289]: E0110 08:53:05.242973    1289 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-072273" containerName="kube-apiserver"
	Jan 10 08:53:08 embed-certs-072273 kubelet[1289]: E0110 08:53:08.448152    1289 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-072273" containerName="kube-scheduler"
	Jan 10 08:53:08 embed-certs-072273 kubelet[1289]: I0110 08:53:08.458383    1289 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-svs4f" podStartSLOduration=3.039897289 podStartE2EDuration="5.45836473s" podCreationTimestamp="2026-01-10 08:53:03 +0000 UTC" firstStartedPulling="2026-01-10 08:53:03.921637923 +0000 UTC m=+6.257128776" lastFinishedPulling="2026-01-10 08:53:06.340105169 +0000 UTC m=+8.675596217" observedRunningTime="2026-01-10 08:53:06.846862746 +0000 UTC m=+9.182353798" watchObservedRunningTime="2026-01-10 08:53:08.45836473 +0000 UTC m=+10.793855577"
	Jan 10 08:53:08 embed-certs-072273 kubelet[1289]: E0110 08:53:08.782264    1289 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-072273" containerName="etcd"
	Jan 10 08:53:08 embed-certs-072273 kubelet[1289]: E0110 08:53:08.840007    1289 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-embed-certs-072273" containerName="kube-scheduler"
	Jan 10 08:53:10 embed-certs-072273 kubelet[1289]: E0110 08:53:10.558954    1289 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-embed-certs-072273" containerName="kube-controller-manager"
	Jan 10 08:53:15 embed-certs-072273 kubelet[1289]: E0110 08:53:15.249426    1289 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-embed-certs-072273" containerName="kube-apiserver"
	Jan 10 08:53:17 embed-certs-072273 kubelet[1289]: I0110 08:53:17.299791    1289 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 10 08:53:17 embed-certs-072273 kubelet[1289]: I0110 08:53:17.387448    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd80313d-e346-47f1-91e0-74108fb2b818-config-volume\") pod \"coredns-7d764666f9-ss4nt\" (UID: \"bd80313d-e346-47f1-91e0-74108fb2b818\") " pod="kube-system/coredns-7d764666f9-ss4nt"
	Jan 10 08:53:17 embed-certs-072273 kubelet[1289]: I0110 08:53:17.387487    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8pff\" (UniqueName: \"kubernetes.io/projected/bd80313d-e346-47f1-91e0-74108fb2b818-kube-api-access-q8pff\") pod \"coredns-7d764666f9-ss4nt\" (UID: \"bd80313d-e346-47f1-91e0-74108fb2b818\") " pod="kube-system/coredns-7d764666f9-ss4nt"
	Jan 10 08:53:17 embed-certs-072273 kubelet[1289]: I0110 08:53:17.387509    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5012cfad-347e-4dec-88ef-1e94c41f70aa-tmp\") pod \"storage-provisioner\" (UID: \"5012cfad-347e-4dec-88ef-1e94c41f70aa\") " pod="kube-system/storage-provisioner"
	Jan 10 08:53:17 embed-certs-072273 kubelet[1289]: I0110 08:53:17.387533    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq625\" (UniqueName: \"kubernetes.io/projected/5012cfad-347e-4dec-88ef-1e94c41f70aa-kube-api-access-mq625\") pod \"storage-provisioner\" (UID: \"5012cfad-347e-4dec-88ef-1e94c41f70aa\") " pod="kube-system/storage-provisioner"
	Jan 10 08:53:17 embed-certs-072273 kubelet[1289]: E0110 08:53:17.860663    1289 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ss4nt" containerName="coredns"
	Jan 10 08:53:17 embed-certs-072273 kubelet[1289]: I0110 08:53:17.884458    1289 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-ss4nt" podStartSLOduration=14.884437463 podStartE2EDuration="14.884437463s" podCreationTimestamp="2026-01-10 08:53:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 08:53:17.873044604 +0000 UTC m=+20.208535467" watchObservedRunningTime="2026-01-10 08:53:17.884437463 +0000 UTC m=+20.219928329"
	Jan 10 08:53:17 embed-certs-072273 kubelet[1289]: I0110 08:53:17.895704    1289 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.89568129 podStartE2EDuration="13.89568129s" podCreationTimestamp="2026-01-10 08:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 08:53:17.885097932 +0000 UTC m=+20.220588797" watchObservedRunningTime="2026-01-10 08:53:17.89568129 +0000 UTC m=+20.231172166"
	Jan 10 08:53:18 embed-certs-072273 kubelet[1289]: E0110 08:53:18.865561    1289 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ss4nt" containerName="coredns"
	Jan 10 08:53:19 embed-certs-072273 kubelet[1289]: E0110 08:53:19.867909    1289 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ss4nt" containerName="coredns"
	Jan 10 08:53:20 embed-certs-072273 kubelet[1289]: I0110 08:53:20.002448    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5mtn\" (UniqueName: \"kubernetes.io/projected/87bb5117-4f07-448e-bd80-5c13abfe1ede-kube-api-access-p5mtn\") pod \"busybox\" (UID: \"87bb5117-4f07-448e-bd80-5c13abfe1ede\") " pod="default/busybox"
	Jan 10 08:53:21 embed-certs-072273 kubelet[1289]: I0110 08:53:21.885262    1289 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.737799546 podStartE2EDuration="2.885241314s" podCreationTimestamp="2026-01-10 08:53:19 +0000 UTC" firstStartedPulling="2026-01-10 08:53:20.310107065 +0000 UTC m=+22.645597921" lastFinishedPulling="2026-01-10 08:53:21.457548835 +0000 UTC m=+23.793039689" observedRunningTime="2026-01-10 08:53:21.885166414 +0000 UTC m=+24.220657277" watchObservedRunningTime="2026-01-10 08:53:21.885241314 +0000 UTC m=+24.220732176"
	
	
	==> storage-provisioner [24a830ab71761b47f09e85240306d7222680278a7b97cb47fe332f1bbde453f7] <==
	I0110 08:53:17.686913       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 08:53:17.695706       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 08:53:17.695900       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 08:53:17.698119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:17.703006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 08:53:17.703138       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 08:53:17.703259       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"77ff52c0-7d74-49c1-b5d8-f06214a410f8", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-072273_966bf1c5-a955-48b1-982f-bbbde9a9d709 became leader
	I0110 08:53:17.703307       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-072273_966bf1c5-a955-48b1-982f-bbbde9a9d709!
	W0110 08:53:17.705485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:17.710095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 08:53:17.804193       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-072273_966bf1c5-a955-48b1-982f-bbbde9a9d709!
	W0110 08:53:19.713071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:19.717296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:21.720665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:21.724794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:23.727604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:23.731909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:25.735569       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:25.741415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:27.745157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:27.749302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-072273 -n embed-certs-072273
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-072273 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.23s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-225354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-225354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (309.521239ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:53:49Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-225354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
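The MK_ADDON_ENABLE_PAUSED exit above shows the addon command failing in minikube's pre-flight "is the cluster paused" check: per the stderr, that check shells out to `sudo runc list -f json` inside the node, which exits 1 because /run/runc does not exist. A minimal way to re-run the same check by hand, assuming the profile name from this run (the diagnosis is inferred from the captured stderr, not confirmed against minikube source):

	minikube -p default-k8s-diff-port-225354 ssh -- sudo ls -ld /run/runc
	minikube -p default-k8s-diff-port-225354 ssh -- sudo runc list -f json

Until a runc-managed container has been created under that state directory, both commands report "no such file or directory", and minikube maps the resulting non-zero exit onto MK_ADDON_ENABLE_PAUSED with exit status 11.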
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-225354 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-225354 describe deploy/metrics-server -n kube-system: exit status 1 (87.048032ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-225354 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
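Here the describe fails with NotFound because the enable step never created the deployment. On a run where the addon does enable, one way to check that the --images/--registries overrides took effect would be a sketch like the following, using the same context (the jsonpath assumes metrics-server is the first container in the pod spec):

	kubectl --context default-k8s-diff-port-225354 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'

The assertion above expects that image string to contain "fake.domain/registry.k8s.io/echoserver:1.4".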
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-225354
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-225354:

-- stdout --
	[
	    {
	        "Id": "2d2060ee1efc6eaa651aa88ab17a771cbf4d2c84c916404a808d16a4e0ebc475",
	        "Created": "2026-01-10T08:53:05.098840342Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 309073,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T08:53:05.150202943Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/2d2060ee1efc6eaa651aa88ab17a771cbf4d2c84c916404a808d16a4e0ebc475/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2d2060ee1efc6eaa651aa88ab17a771cbf4d2c84c916404a808d16a4e0ebc475/hostname",
	        "HostsPath": "/var/lib/docker/containers/2d2060ee1efc6eaa651aa88ab17a771cbf4d2c84c916404a808d16a4e0ebc475/hosts",
	        "LogPath": "/var/lib/docker/containers/2d2060ee1efc6eaa651aa88ab17a771cbf4d2c84c916404a808d16a4e0ebc475/2d2060ee1efc6eaa651aa88ab17a771cbf4d2c84c916404a808d16a4e0ebc475-json.log",
	        "Name": "/default-k8s-diff-port-225354",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-225354:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-225354",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2d2060ee1efc6eaa651aa88ab17a771cbf4d2c84c916404a808d16a4e0ebc475",
	                "LowerDir": "/var/lib/docker/overlay2/53567cf61aa1d670c6024da458ad8a084847f4bc5189cadc4f3bee860aaec98d-init/diff:/var/lib/docker/overlay2/97b14eb520192356c991915ce74cd6094e6ef948f398ac688b667ea18491ccde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/53567cf61aa1d670c6024da458ad8a084847f4bc5189cadc4f3bee860aaec98d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/53567cf61aa1d670c6024da458ad8a084847f4bc5189cadc4f3bee860aaec98d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/53567cf61aa1d670c6024da458ad8a084847f4bc5189cadc4f3bee860aaec98d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-225354",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-225354/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-225354",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-225354",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-225354",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "32a5f49127313d9a54272256978118676e67af676c4e56612b9735b6c366ae2b",
	            "SandboxKey": "/var/run/docker/netns/32a5f4912731",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-225354": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be766c670cb0e4620e923d047ad46e6a4f2da6ed81b0b1be71e9292154f73b90",
	                    "EndpointID": "9152297cc574a20c2c058fd3f1f00f0cee3e903bced59fc34f57c87330a18343",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "e6:da:b4:5b:f8:fe",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-225354",
	                        "2d2060ee1efc"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
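Note on the inspect output above: every `HostConfig.PortBindings` entry requests `"HostPort": ""`, which asks Docker to bind an ephemeral host port on 127.0.0.1; the resolved mappings appear only under `NetworkSettings.Ports` (here 8444/tcp ended up on 127.0.0.1:33106). A sketch of reading a resolved mapping back with the standard docker CLI, using the container name from this run:

	# Human-readable form
	docker port default-k8s-diff-port-225354 8444/tcp

	# Same lookup via a Go template passed as an inspect format string
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-225354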
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-225354 -n default-k8s-diff-port-225354
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-225354 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-225354 logs -n 25: (1.790744296s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-472660 sudo cri-dockerd --version                                                                                                                                                                                                  │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                    │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │                     │
	│ ssh     │ -p flannel-472660 sudo systemctl cat containerd --no-pager                                                                                                                                                                                    │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                             │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo cat /etc/containerd/config.toml                                                                                                                                                                                        │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo containerd config dump                                                                                                                                                                                                 │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                          │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo systemctl cat crio --no-pager                                                                                                                                                                                          │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo crio config                                                                                                                                                                                                            │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ delete  │ -p disable-driver-mounts-847921                                                                                                                                                                                                               │ disable-driver-mounts-847921 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p default-k8s-diff-port-225354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-093083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-095312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ stop    │ -p old-k8s-version-093083 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ stop    │ -p no-preload-095312 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-093083 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p old-k8s-version-093083 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-095312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p no-preload-095312 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-072273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-072273           │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ stop    │ -p embed-certs-072273 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-072273           │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-072273 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-072273           │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p embed-certs-072273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-072273           │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-225354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:53:48
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:53:48.577618  319849 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:53:48.577963  319849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:53:48.577976  319849 out.go:374] Setting ErrFile to fd 2...
	I0110 08:53:48.577982  319849 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:53:48.578302  319849 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:53:48.578919  319849 out.go:368] Setting JSON to false
	I0110 08:53:48.580523  319849 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2181,"bootTime":1768033048,"procs":369,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:53:48.580584  319849 start.go:143] virtualization: kvm guest
	I0110 08:53:48.582779  319849 out.go:179] * [embed-certs-072273] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 08:53:48.584034  319849 notify.go:221] Checking for updates...
	I0110 08:53:48.584061  319849 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:53:48.585268  319849 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:53:48.587031  319849 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:53:48.588859  319849 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:53:48.590169  319849 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 08:53:48.591481  319849 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:53:48.593317  319849 config.go:182] Loaded profile config "embed-certs-072273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:53:48.593995  319849 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:53:48.625447  319849 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:53:48.625603  319849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:53:48.696805  319849 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2026-01-10 08:53:48.684117443 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:53:48.697001  319849 docker.go:319] overlay module found
	I0110 08:53:48.699077  319849 out.go:179] * Using the docker driver based on existing profile
	I0110 08:53:48.700278  319849 start.go:309] selected driver: docker
	I0110 08:53:48.700297  319849 start.go:928] validating driver "docker" against &{Name:embed-certs-072273 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-072273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:53:48.700398  319849 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:53:48.701134  319849 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:53:48.768549  319849 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2026-01-10 08:53:48.756285464 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:53:48.768911  319849 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 08:53:48.768944  319849 cni.go:84] Creating CNI manager for ""
	I0110 08:53:48.769011  319849 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:53:48.769104  319849 start.go:353] cluster config:
	{Name:embed-certs-072273 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:embed-certs-072273 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:53:48.771944  319849 out.go:179] * Starting "embed-certs-072273" primary control-plane node in "embed-certs-072273" cluster
	I0110 08:53:48.773462  319849 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 08:53:48.774690  319849 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:53:48.776069  319849 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:53:48.776111  319849 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 08:53:48.776135  319849 cache.go:65] Caching tarball of preloaded images
	I0110 08:53:48.776228  319849 preload.go:251] Found /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 08:53:48.776244  319849 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 08:53:48.776223  319849 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:53:48.776378  319849 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/embed-certs-072273/config.json ...
	I0110 08:53:48.802317  319849 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 08:53:48.802340  319849 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 08:53:48.802359  319849 cache.go:243] Successfully downloaded all kic artifacts
	I0110 08:53:48.802393  319849 start.go:360] acquireMachinesLock for embed-certs-072273: {Name:mk2e5835d14f3ed88508f9c4afd56379773523bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:53:48.802458  319849 start.go:364] duration metric: took 43.538µs to acquireMachinesLock for "embed-certs-072273"
	I0110 08:53:48.802482  319849 start.go:96] Skipping create...Using existing machine configuration
	I0110 08:53:48.802493  319849 fix.go:54] fixHost starting: 
	I0110 08:53:48.802780  319849 cli_runner.go:164] Run: docker container inspect embed-certs-072273 --format={{.State.Status}}
	I0110 08:53:48.826087  319849 fix.go:112] recreateIfNeeded on embed-certs-072273: state=Stopped err=<nil>
	W0110 08:53:48.826129  319849 fix.go:138] unexpected machine state, will restart: <nil>
	W0110 08:53:46.948400  313874 pod_ready.go:104] pod "coredns-7d764666f9-wpsnn" is not "Ready", error: <nil>
	W0110 08:53:48.949371  313874 pod_ready.go:104] pod "coredns-7d764666f9-wpsnn" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Jan 10 08:53:37 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:37.630126698Z" level=info msg="Starting container: c50aeda1c1dec636e2bd44aaf8b96a30ad2531576fbea258821cc84068389c4d" id=b7419bfd-e88d-45f0-93bf-8b3dfca9dca9 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:53:37 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:37.63249639Z" level=info msg="Started container" PID=1903 containerID=c50aeda1c1dec636e2bd44aaf8b96a30ad2531576fbea258821cc84068389c4d description=kube-system/coredns-7d764666f9-cjklg/coredns id=b7419bfd-e88d-45f0-93bf-8b3dfca9dca9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5c54769a617970b73f9a80651ed8ce33dc31bafd5ba7904381e50b41b767806d
	Jan 10 08:53:41 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:41.109554488Z" level=info msg="Running pod sandbox: default/busybox/POD" id=70243571-b835-40fa-a395-e10ac49e99fc name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 08:53:41 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:41.109656417Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:53:41 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:41.116232919Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5d0969c92c6ef151d0889fc664a6ad26ada98d38db2bb1c61da31fad3ef7027b UID:b4493b91-1903-4206-9dce-fe0d85c95ef9 NetNS:/var/run/netns/78a0d869-ad66-4b31-860d-56d9b046e630 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0003188b8}] Aliases:map[]}"
	Jan 10 08:53:41 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:41.116273783Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Jan 10 08:53:41 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:41.137394388Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:5d0969c92c6ef151d0889fc664a6ad26ada98d38db2bb1c61da31fad3ef7027b UID:b4493b91-1903-4206-9dce-fe0d85c95ef9 NetNS:/var/run/netns/78a0d869-ad66-4b31-860d-56d9b046e630 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0003188b8}] Aliases:map[]}"
	Jan 10 08:53:41 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:41.137571443Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Jan 10 08:53:41 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:41.138699227Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 10 08:53:41 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:41.140061121Z" level=info msg="Ran pod sandbox 5d0969c92c6ef151d0889fc664a6ad26ada98d38db2bb1c61da31fad3ef7027b with infra container: default/busybox/POD" id=70243571-b835-40fa-a395-e10ac49e99fc name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 08:53:41 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:41.141515348Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cbd30f4d-5794-4c31-be0e-e309a6038565 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:53:41 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:41.14167889Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=cbd30f4d-5794-4c31-be0e-e309a6038565 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:53:41 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:41.141873181Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=cbd30f4d-5794-4c31-be0e-e309a6038565 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:53:41 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:41.142766383Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=777c3708-64b0-4370-9f85-d087d49f9762 name=/runtime.v1.ImageService/PullImage
	Jan 10 08:53:41 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:41.143143034Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Jan 10 08:53:42 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:42.351171043Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=777c3708-64b0-4370-9f85-d087d49f9762 name=/runtime.v1.ImageService/PullImage
	Jan 10 08:53:42 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:42.351796843Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4be52207-7507-4300-a9cd-afb56957e8d8 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:53:42 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:42.353850843Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=996876b4-9d31-4218-ba26-0832ab25ece3 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:53:42 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:42.357643719Z" level=info msg="Creating container: default/busybox/busybox" id=c7c3553a-401d-4de6-af17-fb8c11eae2e9 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:53:42 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:42.357801699Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:53:42 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:42.36217629Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:53:42 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:42.362846321Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:53:42 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:42.396522567Z" level=info msg="Created container bcc12969ee8baa82bf64675a3ba05dac15e90596def9aa1956ab88c16266d746: default/busybox/busybox" id=c7c3553a-401d-4de6-af17-fb8c11eae2e9 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:53:42 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:42.397266502Z" level=info msg="Starting container: bcc12969ee8baa82bf64675a3ba05dac15e90596def9aa1956ab88c16266d746" id=eb65f7e6-b8bf-4c02-a1b4-7bd37490960e name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:53:42 default-k8s-diff-port-225354 crio[779]: time="2026-01-10T08:53:42.399448739Z" level=info msg="Started container" PID=1983 containerID=bcc12969ee8baa82bf64675a3ba05dac15e90596def9aa1956ab88c16266d746 description=default/busybox/busybox id=eb65f7e6-b8bf-4c02-a1b4-7bd37490960e name=/runtime.v1.RuntimeService/StartContainer sandboxID=5d0969c92c6ef151d0889fc664a6ad26ada98d38db2bb1c61da31fad3ef7027b
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	bcc12969ee8ba       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   5d0969c92c6ef       busybox                                                default
	c50aeda1c1dec       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      12 seconds ago      Running             coredns                   0                   5c54769a61797       coredns-7d764666f9-cjklg                               kube-system
	c37668b9ec115       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   03a6ff2196f9f       storage-provisioner                                    kube-system
	4e0f3f8180ce4       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    23 seconds ago      Running             kindnet-cni               0                   7fc94b9db9184       kindnet-sd4nd                                          kube-system
	aa8800f306ce0       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                      25 seconds ago      Running             kube-proxy                0                   03b169fa57f1c       kube-proxy-fbfrd                                       kube-system
	a5135e8243839       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                      35 seconds ago      Running             kube-scheduler            0                   b47414c4b2ee6       kube-scheduler-default-k8s-diff-port-225354            kube-system
	105e8538c058f       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      35 seconds ago      Running             etcd                      0                   fdac0442d2409       etcd-default-k8s-diff-port-225354                      kube-system
	618623847bc37       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                      35 seconds ago      Running             kube-controller-manager   0                   51c6e5a18907a       kube-controller-manager-default-k8s-diff-port-225354   kube-system
	91e87b6144065       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                      35 seconds ago      Running             kube-apiserver            0                   da306cf8e5b89       kube-apiserver-default-k8s-diff-port-225354            kube-system
	
	
	==> coredns [c50aeda1c1dec636e2bd44aaf8b96a30ad2531576fbea258821cc84068389c4d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:58553 - 32106 "HINFO IN 5281336240195965541.1567162336741320770. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021588994s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-225354
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-225354
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=default-k8s-diff-port-225354
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T08_53_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 08:53:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-225354
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 08:53:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 08:53:50 +0000   Sat, 10 Jan 2026 08:53:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 08:53:50 +0000   Sat, 10 Jan 2026 08:53:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 08:53:50 +0000   Sat, 10 Jan 2026 08:53:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 08:53:50 +0000   Sat, 10 Jan 2026 08:53:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-225354
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                5a40150e-8f76-4d08-b9ae-bb32149e49ad
	  Boot ID:                    7c12f492-59b6-440b-986c-741ece916a23
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-7d764666f9-cjklg                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-default-k8s-diff-port-225354                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-sd4nd                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-225354             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-225354    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-fbfrd                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-225354             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  27s   node-controller  Node default-k8s-diff-port-225354 event: Registered Node default-k8s-diff-port-225354 in Controller
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 4a 03 46 88 66 4e 08 06
	[  +6.406566] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.001429] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[ +24.002020] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7a 6f 49 7e 9d d1 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 3a ac 18 98 c9 08 06
	[Jan10 08:52] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4a 05 5c 86 cf df 08 06
	[  +0.000337] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.000602] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[  +0.739059] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	[  +1.988700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[ +20.040962] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 ac f8 cf fb 84 08 06
	[  +0.000375] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[  +0.000915] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	
	
	==> etcd [105e8538c058f05f9bf8f90107c7b066f61f248cdadf1e497b602dc6b288afe7] <==
	{"level":"info","ts":"2026-01-10T08:53:15.108503Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T08:53:15.900259Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-10T08:53:15.900364Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-10T08:53:15.900437Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2026-01-10T08:53:15.900452Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:53:15.900474Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T08:53:15.901166Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T08:53:15.901205Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:53:15.901222Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2026-01-10T08:53:15.901229Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T08:53:15.901882Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:53:15.902484Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-diff-port-225354 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T08:53:15.902491Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:53:15.902512Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:53:15.902756Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T08:53:15.902753Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:53:15.902787Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T08:53:15.902887Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:53:15.902927Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:53:15.902952Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T08:53:15.903055Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T08:53:15.903650Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:53:15.904190Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:53:15.906642Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T08:53:15.906827Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 08:53:51 up 36 min,  0 user,  load average: 5.76, 4.20, 2.64
	Linux default-k8s-diff-port-225354 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4e0f3f8180ce4116bc9f06ead0a6f9f1106c3049571dd20a9eaa12f63c7900f7] <==
	I0110 08:53:26.803067       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 08:53:26.803327       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0110 08:53:26.803492       1 main.go:148] setting mtu 1500 for CNI 
	I0110 08:53:26.803515       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 08:53:26.803534       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T08:53:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 08:53:27.102397       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 08:53:27.102515       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 08:53:27.102530       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 08:53:27.200086       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 08:53:27.499916       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 08:53:27.499947       1 metrics.go:72] Registering metrics
	I0110 08:53:27.500025       1 controller.go:711] "Syncing nftables rules"
	I0110 08:53:37.103194       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 08:53:37.103267       1 main.go:301] handling current node
	I0110 08:53:47.104855       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 08:53:47.104925       1 main.go:301] handling current node
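
	[Note: kindnet comes up, logs a benign NRI warning (there is no /var/run/nri/nri.sock on this node, so the kube-network-policies NRI plugin exits), syncs its informer caches, and then handles only the local node. A sketch for tailing the same output directly; the app=kindnet label is an assumption about minikube's kindnet DaemonSet, so adjust if the manifest labels differ:
	kubectl -n kube-system logs -l app=kindnet --tail=20]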
	
	
	==> kube-apiserver [91e87b614406591f529f4d28b97c10725111baad8f8bfcb2312a7aa129141a07] <==
	I0110 08:53:16.873507       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 08:53:16.873558       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 08:53:16.873566       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 08:53:16.874871       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 08:53:16.874958       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:53:16.879054       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:53:17.064072       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 08:53:17.775743       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0110 08:53:17.779409       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0110 08:53:17.779426       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 08:53:18.234945       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 08:53:18.271338       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 08:53:18.380794       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 08:53:18.387114       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I0110 08:53:18.388426       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 08:53:18.392882       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 08:53:18.803424       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 08:53:19.565281       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 08:53:19.574572       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 08:53:19.581987       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0110 08:53:24.455403       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 08:53:24.506949       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:53:24.512691       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:53:24.804188       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E0110 08:53:48.918793       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:46604: use of closed network connection
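
	[Note: the apiserver log records ClusterIP allocator updates for the 10.96.0.0/12 Service CIDR plus the usual quota-admission evaluators; the single socket-receive error at 08:53:48 is a client closing its connection, not a server fault. On clusters where the ServiceCIDR API is served (GA in recent Kubernetes releases, so it should apply to this v1.35.0 cluster), the allocator state can be inspected with:
	kubectl get servicecidrs
	kubectl get svc -A -o wide]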
	
	
	==> kube-controller-manager [618623847bc37a0d6743b2705ec61231b7d176b9ece7c5ed8087e048685915d4] <==
	I0110 08:53:23.615362       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:23.615413       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:23.615419       1 range_allocator.go:177] "Sending events to api server"
	I0110 08:53:23.615469       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0110 08:53:23.615475       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:53:23.615482       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:23.615585       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:23.615957       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:23.616002       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:23.616076       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:23.617975       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:23.619429       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:23.619519       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:23.619537       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:23.619520       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:23.619522       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:23.619910       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:23.620020       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:23.622888       1 range_allocator.go:433] "Set node PodCIDR" node="default-k8s-diff-port-225354" podCIDRs=["10.244.0.0/24"]
	I0110 08:53:23.629877       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:23.713756       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:23.714844       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:23.714863       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 08:53:23.714870       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 08:53:38.610498       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
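
	[Note: the controller-manager syncs its informer caches, assigns PodCIDR 10.244.0.0/24 to the node, and exits master disruption mode once the node reports Ready. A quick confirmation of the assignment:
	kubectl get node default-k8s-diff-port-225354 -o jsonpath='{.spec.podCIDR}{"\n"}']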
	
	
	==> kube-proxy [aa8800f306ce0d80a4e94cca28422049453df7fb22671239813084b255cfe584] <==
	I0110 08:53:25.353460       1 server_linux.go:53] "Using iptables proxy"
	I0110 08:53:25.426020       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:53:25.526163       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:25.526204       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0110 08:53:25.526308       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 08:53:25.550993       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 08:53:25.551067       1 server_linux.go:136] "Using iptables Proxier"
	I0110 08:53:25.558340       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 08:53:25.558918       1 server.go:529] "Version info" version="v1.35.0"
	I0110 08:53:25.558943       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:53:25.560472       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 08:53:25.560507       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 08:53:25.560599       1 config.go:200] "Starting service config controller"
	I0110 08:53:25.560606       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 08:53:25.560654       1 config.go:106] "Starting endpoint slice config controller"
	I0110 08:53:25.560663       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 08:53:25.560664       1 config.go:309] "Starting node config controller"
	I0110 08:53:25.560672       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 08:53:25.560680       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 08:53:25.660698       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 08:53:25.660894       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 08:53:25.662082       1 shared_informer.go:356] "Caches are synced" controller="service config"
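
	[Note: kube-proxy's only complaint is that nodePortAddresses is unset, so NodePort connections are accepted on all local IPs; the log itself suggests `--nodeport-addresses primary` as the remedy. A sketch for checking what the running config carries, assuming a kubeadm-style cluster where the config lives in the kube-proxy ConfigMap:
	kubectl -n kube-system get configmap kube-proxy -o yaml | grep -A1 nodePortAddresses]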
	
	
	==> kube-scheduler [a5135e824383987ecc928ec98039e0896eec07ce36124e337e1ec1778e3a3c14] <==
	E0110 08:53:16.827381       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 08:53:16.828020       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 08:53:16.828042       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 08:53:16.828124       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 08:53:16.828166       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 08:53:16.828153       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 08:53:16.828384       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 08:53:16.828447       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 08:53:16.828504       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 08:53:16.828545       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 08:53:16.828635       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 08:53:16.828689       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 08:53:16.828898       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 08:53:16.829086       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 08:53:17.760249       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 08:53:17.791934       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E0110 08:53:17.836713       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 08:53:17.850719       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E0110 08:53:17.880507       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 08:53:17.917541       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 08:53:17.974926       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 08:53:17.977693       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 08:53:18.009846       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 08:53:18.029887       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	I0110 08:53:20.622096       1 shared_informer.go:377] "Caches are synced"
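
	[Note: every scheduler error above is a startup-ordering artifact: the list/watch calls fire before the system:kube-scheduler RBAC bindings are reconciled, and the final "Caches are synced" line at 08:53:20 shows they eventually succeed. A simple impersonation check to verify the permissions after bootstrap:
	kubectl auth can-i list nodes --as=system:kube-scheduler]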
	
	
	==> kubelet <==
	Jan 10 08:53:24 default-k8s-diff-port-225354 kubelet[1310]: I0110 08:53:24.933696    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvx47\" (UniqueName: \"kubernetes.io/projected/ca5dfc30-5416-4215-a090-edbc4a878737-kube-api-access-wvx47\") pod \"kube-proxy-fbfrd\" (UID: \"ca5dfc30-5416-4215-a090-edbc4a878737\") " pod="kube-system/kube-proxy-fbfrd"
	Jan 10 08:53:24 default-k8s-diff-port-225354 kubelet[1310]: I0110 08:53:24.933841    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/24ae2cd1-793e-4c82-b6f7-eace35334eba-cni-cfg\") pod \"kindnet-sd4nd\" (UID: \"24ae2cd1-793e-4c82-b6f7-eace35334eba\") " pod="kube-system/kindnet-sd4nd"
	Jan 10 08:53:24 default-k8s-diff-port-225354 kubelet[1310]: I0110 08:53:24.933873    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/24ae2cd1-793e-4c82-b6f7-eace35334eba-xtables-lock\") pod \"kindnet-sd4nd\" (UID: \"24ae2cd1-793e-4c82-b6f7-eace35334eba\") " pod="kube-system/kindnet-sd4nd"
	Jan 10 08:53:24 default-k8s-diff-port-225354 kubelet[1310]: I0110 08:53:24.933928    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24ae2cd1-793e-4c82-b6f7-eace35334eba-lib-modules\") pod \"kindnet-sd4nd\" (UID: \"24ae2cd1-793e-4c82-b6f7-eace35334eba\") " pod="kube-system/kindnet-sd4nd"
	Jan 10 08:53:24 default-k8s-diff-port-225354 kubelet[1310]: I0110 08:53:24.934029    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ca5dfc30-5416-4215-a090-edbc4a878737-xtables-lock\") pod \"kube-proxy-fbfrd\" (UID: \"ca5dfc30-5416-4215-a090-edbc4a878737\") " pod="kube-system/kube-proxy-fbfrd"
	Jan 10 08:53:24 default-k8s-diff-port-225354 kubelet[1310]: I0110 08:53:24.934066    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ca5dfc30-5416-4215-a090-edbc4a878737-lib-modules\") pod \"kube-proxy-fbfrd\" (UID: \"ca5dfc30-5416-4215-a090-edbc4a878737\") " pod="kube-system/kube-proxy-fbfrd"
	Jan 10 08:53:25 default-k8s-diff-port-225354 kubelet[1310]: E0110 08:53:25.367516    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-225354" containerName="etcd"
	Jan 10 08:53:27 default-k8s-diff-port-225354 kubelet[1310]: I0110 08:53:27.458482    1310 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-fbfrd" podStartSLOduration=3.458464388 podStartE2EDuration="3.458464388s" podCreationTimestamp="2026-01-10 08:53:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 08:53:25.464843859 +0000 UTC m=+6.138300433" watchObservedRunningTime="2026-01-10 08:53:27.458464388 +0000 UTC m=+8.131920962"
	Jan 10 08:53:28 default-k8s-diff-port-225354 kubelet[1310]: E0110 08:53:28.486012    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-225354" containerName="kube-apiserver"
	Jan 10 08:53:28 default-k8s-diff-port-225354 kubelet[1310]: I0110 08:53:28.499872    1310 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-sd4nd" podStartSLOduration=3.078042469 podStartE2EDuration="4.499847407s" podCreationTimestamp="2026-01-10 08:53:24 +0000 UTC" firstStartedPulling="2026-01-10 08:53:25.175083264 +0000 UTC m=+5.848539834" lastFinishedPulling="2026-01-10 08:53:26.596888218 +0000 UTC m=+7.270344772" observedRunningTime="2026-01-10 08:53:27.459494525 +0000 UTC m=+8.132951099" watchObservedRunningTime="2026-01-10 08:53:28.499847407 +0000 UTC m=+9.173303982"
	Jan 10 08:53:31 default-k8s-diff-port-225354 kubelet[1310]: E0110 08:53:31.635577    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-default-k8s-diff-port-225354" containerName="kube-scheduler"
	Jan 10 08:53:32 default-k8s-diff-port-225354 kubelet[1310]: E0110 08:53:32.024393    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-default-k8s-diff-port-225354" containerName="kube-controller-manager"
	Jan 10 08:53:35 default-k8s-diff-port-225354 kubelet[1310]: E0110 08:53:35.369197    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-default-k8s-diff-port-225354" containerName="etcd"
	Jan 10 08:53:37 default-k8s-diff-port-225354 kubelet[1310]: I0110 08:53:37.244053    1310 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Jan 10 08:53:37 default-k8s-diff-port-225354 kubelet[1310]: I0110 08:53:37.336813    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e79f65c-0d71-4a6d-9745-cabfb1e2510a-config-volume\") pod \"coredns-7d764666f9-cjklg\" (UID: \"7e79f65c-0d71-4a6d-9745-cabfb1e2510a\") " pod="kube-system/coredns-7d764666f9-cjklg"
	Jan 10 08:53:37 default-k8s-diff-port-225354 kubelet[1310]: I0110 08:53:37.336856    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/740928ba-bed5-4e17-bbba-ff0e40407f88-tmp\") pod \"storage-provisioner\" (UID: \"740928ba-bed5-4e17-bbba-ff0e40407f88\") " pod="kube-system/storage-provisioner"
	Jan 10 08:53:37 default-k8s-diff-port-225354 kubelet[1310]: I0110 08:53:37.336893    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd4pd\" (UniqueName: \"kubernetes.io/projected/740928ba-bed5-4e17-bbba-ff0e40407f88-kube-api-access-bd4pd\") pod \"storage-provisioner\" (UID: \"740928ba-bed5-4e17-bbba-ff0e40407f88\") " pod="kube-system/storage-provisioner"
	Jan 10 08:53:37 default-k8s-diff-port-225354 kubelet[1310]: I0110 08:53:37.336915    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k76t9\" (UniqueName: \"kubernetes.io/projected/7e79f65c-0d71-4a6d-9745-cabfb1e2510a-kube-api-access-k76t9\") pod \"coredns-7d764666f9-cjklg\" (UID: \"7e79f65c-0d71-4a6d-9745-cabfb1e2510a\") " pod="kube-system/coredns-7d764666f9-cjklg"
	Jan 10 08:53:38 default-k8s-diff-port-225354 kubelet[1310]: E0110 08:53:38.469915    1310 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-cjklg" containerName="coredns"
	Jan 10 08:53:38 default-k8s-diff-port-225354 kubelet[1310]: E0110 08:53:38.493579    1310 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-default-k8s-diff-port-225354" containerName="kube-apiserver"
	Jan 10 08:53:38 default-k8s-diff-port-225354 kubelet[1310]: I0110 08:53:38.501113    1310 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-cjklg" podStartSLOduration=14.501092965 podStartE2EDuration="14.501092965s" podCreationTimestamp="2026-01-10 08:53:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 08:53:38.500577354 +0000 UTC m=+19.174033929" watchObservedRunningTime="2026-01-10 08:53:38.501092965 +0000 UTC m=+19.174549542"
	Jan 10 08:53:38 default-k8s-diff-port-225354 kubelet[1310]: I0110 08:53:38.501259    1310 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.501249548 podStartE2EDuration="13.501249548s" podCreationTimestamp="2026-01-10 08:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 08:53:38.483974156 +0000 UTC m=+19.157430730" watchObservedRunningTime="2026-01-10 08:53:38.501249548 +0000 UTC m=+19.174706121"
	Jan 10 08:53:39 default-k8s-diff-port-225354 kubelet[1310]: E0110 08:53:39.471666    1310 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-cjklg" containerName="coredns"
	Jan 10 08:53:40 default-k8s-diff-port-225354 kubelet[1310]: E0110 08:53:40.475806    1310 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-cjklg" containerName="coredns"
	Jan 10 08:53:40 default-k8s-diff-port-225354 kubelet[1310]: I0110 08:53:40.856892    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjsd5\" (UniqueName: \"kubernetes.io/projected/b4493b91-1903-4206-9dce-fe0d85c95ef9-kube-api-access-kjsd5\") pod \"busybox\" (UID: \"b4493b91-1903-4206-9dce-fe0d85c95ef9\") " pod="default/busybox"
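
	[Note: the kubelet entries are routine: volume attach/mount bookkeeping, "probe already exists" messages emitted as static-pod containers are re-observed, and pod startup SLO tracking. One way to pull the raw journal from the node, assuming the default systemd unit name:
	minikube ssh -p default-k8s-diff-port-225354 "sudo journalctl -u kubelet -n 50 --no-pager"]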
	
	
	==> storage-provisioner [c37668b9ec1156d4896604f1c9d98053453f6e2b955f788eafa02563cca8d5fb] <==
	I0110 08:53:37.639824       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 08:53:37.649438       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 08:53:37.649501       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 08:53:37.651471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:37.655657       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 08:53:37.655824       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 08:53:37.655947       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-225354_d3952ebe-e1f1-4211-bc33-480f36a8cf8f!
	I0110 08:53:37.655946       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ec9aed77-6d7b-4b77-832d-6c05972cbbb9", APIVersion:"v1", ResourceVersion:"413", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-225354_d3952ebe-e1f1-4211-bc33-480f36a8cf8f became leader
	W0110 08:53:37.659017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:37.663236       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 08:53:37.756922       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-225354_d3952ebe-e1f1-4211-bc33-480f36a8cf8f!
	W0110 08:53:39.666923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:39.671810       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:41.676306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:41.682491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:43.685928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:43.689997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:45.694423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:45.700761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:47.751317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:47.772572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:49.775945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:53:49.782370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
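
	[Note: storage-provisioner acquires its leader-election lock on the kube-system/k8s.io-minikube-hostpath Endpoints object, which is why client-go keeps warning that v1 Endpoints is deprecated in v1.33+: the provisioner still uses an Endpoints-based lock rather than a Lease. The lock object itself can be inspected with:
	kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml]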
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-225354 -n default-k8s-diff-port-225354
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-225354 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (5.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-093083 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-093083 --alsologtostderr -v=1: exit status 80 (1.70312766s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-093083 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 08:54:18.929887  325658 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:54:18.930621  325658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:18.930633  325658 out.go:374] Setting ErrFile to fd 2...
	I0110 08:54:18.930638  325658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:18.930844  325658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:54:18.931103  325658 out.go:368] Setting JSON to false
	I0110 08:54:18.931123  325658 mustload.go:66] Loading cluster: old-k8s-version-093083
	I0110 08:54:18.931477  325658 config.go:182] Loaded profile config "old-k8s-version-093083": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I0110 08:54:18.931877  325658 cli_runner.go:164] Run: docker container inspect old-k8s-version-093083 --format={{.State.Status}}
	I0110 08:54:18.952896  325658 host.go:66] Checking if "old-k8s-version-093083" exists ...
	I0110 08:54:18.953246  325658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:54:19.024229  325658 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2026-01-10 08:54:19.010153872 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:54:19.024935  325658 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:old-k8s-version-093083 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 08:54:19.027419  325658 out.go:179] * Pausing node old-k8s-version-093083 ... 
	I0110 08:54:19.028685  325658 host.go:66] Checking if "old-k8s-version-093083" exists ...
	I0110 08:54:19.028981  325658 ssh_runner.go:195] Run: systemctl --version
	I0110 08:54:19.029027  325658 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-093083
	I0110 08:54:19.049035  325658 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/old-k8s-version-093083/id_rsa Username:docker}
	I0110 08:54:19.149781  325658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:54:19.162691  325658 pause.go:52] kubelet running: true
	I0110 08:54:19.162769  325658 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 08:54:19.343894  325658 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 08:54:19.343998  325658 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 08:54:19.423013  325658 cri.go:96] found id: "cf2021f237b7a23412332d927f1e3fc61448f19ce766d0d96d5a317c9855bb65"
	I0110 08:54:19.423039  325658 cri.go:96] found id: "baf39ba3661d5e4554402c182c54915a779e9c2589d4896b8c74b818186d2e2e"
	I0110 08:54:19.423047  325658 cri.go:96] found id: "be67c5d068c39c6b842d3d6eabf77430720ba8af75b51a3ea51b3a1d05abe021"
	I0110 08:54:19.423054  325658 cri.go:96] found id: "b731c34ce0dc7be8cef547822c5515bcc237062ed93d07c59c1bf099151ddcd5"
	I0110 08:54:19.423060  325658 cri.go:96] found id: "d4e39023b51206e0be48a97c34283a8e61c92fde7bfdd6b8d4de4724d840f8df"
	I0110 08:54:19.423066  325658 cri.go:96] found id: "dd24142d016939bba737e4aa2e124d9cca83e550da432b869538feff1f575331"
	I0110 08:54:19.423071  325658 cri.go:96] found id: "77835742c9e4e9169a8997b0af913ac31071a741ce055172f48d6e40e8bb0dfa"
	I0110 08:54:19.423082  325658 cri.go:96] found id: "45c05fec75b0148f5c10bc223ec4f3c0de54145a816e2131a615a16966edecc9"
	I0110 08:54:19.423088  325658 cri.go:96] found id: "a34eedbb84c37c48a1a753a25087a48c8b7295f35503fe8d9738f819582226fa"
	I0110 08:54:19.423101  325658 cri.go:96] found id: "1aa38e065133620305b9ea5ba3cc57e2dd22a6a1fcb024cc2b81ed4b8495d94a"
	I0110 08:54:19.423122  325658 cri.go:96] found id: "e605b843e423ec01dd112548d07dcbdec1954f9df6ba09936c682d71de576f93"
	I0110 08:54:19.423131  325658 cri.go:96] found id: ""
	I0110 08:54:19.423182  325658 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:54:19.439502  325658 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:54:19Z" level=error msg="open /run/runc: no such file or directory"
	I0110 08:54:19.754786  325658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:54:19.770011  325658 pause.go:52] kubelet running: false
	I0110 08:54:19.770070  325658 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 08:54:19.948156  325658 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 08:54:19.948249  325658 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 08:54:20.028451  325658 cri.go:96] found id: "cf2021f237b7a23412332d927f1e3fc61448f19ce766d0d96d5a317c9855bb65"
	I0110 08:54:20.028476  325658 cri.go:96] found id: "baf39ba3661d5e4554402c182c54915a779e9c2589d4896b8c74b818186d2e2e"
	I0110 08:54:20.028482  325658 cri.go:96] found id: "be67c5d068c39c6b842d3d6eabf77430720ba8af75b51a3ea51b3a1d05abe021"
	I0110 08:54:20.028487  325658 cri.go:96] found id: "b731c34ce0dc7be8cef547822c5515bcc237062ed93d07c59c1bf099151ddcd5"
	I0110 08:54:20.028491  325658 cri.go:96] found id: "d4e39023b51206e0be48a97c34283a8e61c92fde7bfdd6b8d4de4724d840f8df"
	I0110 08:54:20.028513  325658 cri.go:96] found id: "dd24142d016939bba737e4aa2e124d9cca83e550da432b869538feff1f575331"
	I0110 08:54:20.028518  325658 cri.go:96] found id: "77835742c9e4e9169a8997b0af913ac31071a741ce055172f48d6e40e8bb0dfa"
	I0110 08:54:20.028522  325658 cri.go:96] found id: "45c05fec75b0148f5c10bc223ec4f3c0de54145a816e2131a615a16966edecc9"
	I0110 08:54:20.028527  325658 cri.go:96] found id: "a34eedbb84c37c48a1a753a25087a48c8b7295f35503fe8d9738f819582226fa"
	I0110 08:54:20.028536  325658 cri.go:96] found id: "1aa38e065133620305b9ea5ba3cc57e2dd22a6a1fcb024cc2b81ed4b8495d94a"
	I0110 08:54:20.028540  325658 cri.go:96] found id: "e605b843e423ec01dd112548d07dcbdec1954f9df6ba09936c682d71de576f93"
	I0110 08:54:20.028546  325658 cri.go:96] found id: ""
	I0110 08:54:20.028616  325658 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:54:20.261653  325658 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:54:20.275606  325658 pause.go:52] kubelet running: false
	I0110 08:54:20.275650  325658 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 08:54:20.464784  325658 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 08:54:20.464868  325658 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 08:54:20.543524  325658 cri.go:96] found id: "cf2021f237b7a23412332d927f1e3fc61448f19ce766d0d96d5a317c9855bb65"
	I0110 08:54:20.543552  325658 cri.go:96] found id: "baf39ba3661d5e4554402c182c54915a779e9c2589d4896b8c74b818186d2e2e"
	I0110 08:54:20.543559  325658 cri.go:96] found id: "be67c5d068c39c6b842d3d6eabf77430720ba8af75b51a3ea51b3a1d05abe021"
	I0110 08:54:20.543565  325658 cri.go:96] found id: "b731c34ce0dc7be8cef547822c5515bcc237062ed93d07c59c1bf099151ddcd5"
	I0110 08:54:20.543571  325658 cri.go:96] found id: "d4e39023b51206e0be48a97c34283a8e61c92fde7bfdd6b8d4de4724d840f8df"
	I0110 08:54:20.543578  325658 cri.go:96] found id: "dd24142d016939bba737e4aa2e124d9cca83e550da432b869538feff1f575331"
	I0110 08:54:20.543584  325658 cri.go:96] found id: "77835742c9e4e9169a8997b0af913ac31071a741ce055172f48d6e40e8bb0dfa"
	I0110 08:54:20.543590  325658 cri.go:96] found id: "45c05fec75b0148f5c10bc223ec4f3c0de54145a816e2131a615a16966edecc9"
	I0110 08:54:20.543595  325658 cri.go:96] found id: "a34eedbb84c37c48a1a753a25087a48c8b7295f35503fe8d9738f819582226fa"
	I0110 08:54:20.543604  325658 cri.go:96] found id: "1aa38e065133620305b9ea5ba3cc57e2dd22a6a1fcb024cc2b81ed4b8495d94a"
	I0110 08:54:20.543614  325658 cri.go:96] found id: "e605b843e423ec01dd112548d07dcbdec1954f9df6ba09936c682d71de576f93"
	I0110 08:54:20.543620  325658 cri.go:96] found id: ""
	I0110 08:54:20.543669  325658 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:54:20.560165  325658 out.go:203] 
	W0110 08:54:20.562174  325658 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:54:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:54:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 08:54:20.562197  325658 out.go:285] * 
	* 
	W0110 08:54:20.564671  325658 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:54:20.566054  325658 out.go:203] 

                                                
                                                
** /stderr **
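[Note: the pause fails because minikube's running-container probe shells out to `sudo runc list -f json`, and on this CRI-O node /run/runc does not exist, so runc exits 1 on every retry even though crictl can still enumerate the same containers. A sketch for reproducing the two probes by hand against this profile:
	minikube ssh -p old-k8s-version-093083 "sudo runc list -f json"
	minikube ssh -p old-k8s-version-093083 "sudo crictl ps -a"]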
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-093083 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-093083
helpers_test.go:244: (dbg) docker inspect old-k8s-version-093083:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5a78f6c87c300c6e0dd5f3bbe31e9ea3aaf153e366cdb192469aeff0f98607cc",
	        "Created": "2026-01-10T08:52:09.133397359Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 313349,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T08:53:22.935205602Z",
	            "FinishedAt": "2026-01-10T08:53:22.00820023Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/5a78f6c87c300c6e0dd5f3bbe31e9ea3aaf153e366cdb192469aeff0f98607cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5a78f6c87c300c6e0dd5f3bbe31e9ea3aaf153e366cdb192469aeff0f98607cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/5a78f6c87c300c6e0dd5f3bbe31e9ea3aaf153e366cdb192469aeff0f98607cc/hosts",
	        "LogPath": "/var/lib/docker/containers/5a78f6c87c300c6e0dd5f3bbe31e9ea3aaf153e366cdb192469aeff0f98607cc/5a78f6c87c300c6e0dd5f3bbe31e9ea3aaf153e366cdb192469aeff0f98607cc-json.log",
	        "Name": "/old-k8s-version-093083",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-093083:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-093083",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5a78f6c87c300c6e0dd5f3bbe31e9ea3aaf153e366cdb192469aeff0f98607cc",
	                "LowerDir": "/var/lib/docker/overlay2/d070b5e56f95f0eb086a5bbe43eeabd880e14f061ffc4bc06dcbc47a66b72ad3-init/diff:/var/lib/docker/overlay2/97b14eb520192356c991915ce74cd6094e6ef948f398ac688b667ea18491ccde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d070b5e56f95f0eb086a5bbe43eeabd880e14f061ffc4bc06dcbc47a66b72ad3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d070b5e56f95f0eb086a5bbe43eeabd880e14f061ffc4bc06dcbc47a66b72ad3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d070b5e56f95f0eb086a5bbe43eeabd880e14f061ffc4bc06dcbc47a66b72ad3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-093083",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-093083/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-093083",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-093083",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-093083",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fb9ba28ee210e086c1d182dde461a21b43a346a88d832ca6296c1445ef1fb399",
	            "SandboxKey": "/var/run/docker/netns/fb9ba28ee210",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-093083": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b8ccbd7d681c9cf4758976716607eccd2bce1e9581afb9f0c4894b2bbb7e4533",
	                    "EndpointID": "a047f574bbf3108209bd32fdb520f9899decca45a8535b473f54d9320f6f26ed",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "82:dd:97:63:ca:78",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-093083",
	                        "5a78f6c87c30"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
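For context: the empty "HostPort" values under HostConfig.PortBindings in the inspect dump above mean Docker was asked to pick ephemeral host ports; the ports actually assigned show up under NetworkSettings.Ports (33108-33112 here). The lookup the harness uses throughout these logs can be reproduced by hand with the same Go-template inspect query (container name copied from this log; a sketch for illustration, not part of the test run):

	# Walks NetworkSettings.Ports["22/tcp"][0].HostPort via nested index calls.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-093083
	# Expected output, per the inspect dump above: 33108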
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-093083 -n old-k8s-version-093083
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-093083 -n old-k8s-version-093083: exit status 2 (359.96142ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
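For reference, the probe above is a plain Go-template query over minikube's status output; run by hand (profile and node names copied from this log) it is:

	# Prints only the host state ("Running", "Stopped", ...). minikube status
	# encodes component health in the exit code, so exit status 2 with
	# Host=Running points at the cluster components (kubelet/apiserver) rather
	# than the container itself, which is consistent with a just-paused cluster.
	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-093083 -n old-k8s-version-093083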
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-093083 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-093083 logs -n 25: (1.139252434s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-472660 sudo containerd config dump                                                                                                                                                                                                 │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                          │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo systemctl cat crio --no-pager                                                                                                                                                                                          │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo crio config                                                                                                                                                                                                            │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ delete  │ -p disable-driver-mounts-847921                                                                                                                                                                                                               │ disable-driver-mounts-847921 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p default-k8s-diff-port-225354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-093083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-095312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ stop    │ -p old-k8s-version-093083 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ stop    │ -p no-preload-095312 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-093083 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p old-k8s-version-093083 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable dashboard -p no-preload-095312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p no-preload-095312 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable metrics-server -p embed-certs-072273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-072273           │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ stop    │ -p embed-certs-072273 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-072273           │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-072273 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-072273           │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p embed-certs-072273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-072273           │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-225354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-225354 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-225354 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p default-k8s-diff-port-225354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ image   │ old-k8s-version-093083 image list --format=json                                                                                                                                                                                               │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p old-k8s-version-093083 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:54:10
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:54:10.770403  323767 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:54:10.770628  323767 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:10.770636  323767 out.go:374] Setting ErrFile to fd 2...
	I0110 08:54:10.770640  323767 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:10.770838  323767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:54:10.771277  323767 out.go:368] Setting JSON to false
	I0110 08:54:10.772500  323767 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2203,"bootTime":1768033048,"procs":370,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:54:10.772550  323767 start.go:143] virtualization: kvm guest
	I0110 08:54:10.774548  323767 out.go:179] * [default-k8s-diff-port-225354] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 08:54:10.775930  323767 notify.go:221] Checking for updates...
	I0110 08:54:10.775987  323767 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:54:10.777327  323767 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:54:10.778691  323767 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:54:10.779869  323767 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:54:10.780877  323767 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 08:54:10.782001  323767 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:54:10.783447  323767 config.go:182] Loaded profile config "default-k8s-diff-port-225354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:10.784033  323767 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:54:10.808861  323767 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:54:10.808963  323767 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:54:10.865984  323767 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-10 08:54:10.855636003 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:54:10.866142  323767 docker.go:319] overlay module found
	I0110 08:54:10.867929  323767 out.go:179] * Using the docker driver based on existing profile
	I0110 08:54:10.869062  323767 start.go:309] selected driver: docker
	I0110 08:54:10.869077  323767 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-225354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-225354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:54:10.869184  323767 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:54:10.869926  323767 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:54:10.925502  323767 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-10 08:54:10.916316006 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:54:10.925807  323767 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 08:54:10.925849  323767 cni.go:84] Creating CNI manager for ""
	I0110 08:54:10.925905  323767 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:54:10.925939  323767 start.go:353] cluster config:
	{Name:default-k8s-diff-port-225354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-225354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:54:10.927799  323767 out.go:179] * Starting "default-k8s-diff-port-225354" primary control-plane node in "default-k8s-diff-port-225354" cluster
	I0110 08:54:10.928989  323767 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 08:54:10.930145  323767 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:54:10.931151  323767 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:54:10.931179  323767 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 08:54:10.931186  323767 cache.go:65] Caching tarball of preloaded images
	I0110 08:54:10.931185  323767 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:54:10.931262  323767 preload.go:251] Found /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 08:54:10.931274  323767 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 08:54:10.931366  323767 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/config.json ...
	I0110 08:54:10.952478  323767 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 08:54:10.952497  323767 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 08:54:10.952511  323767 cache.go:243] Successfully downloaded all kic artifacts
	I0110 08:54:10.952538  323767 start.go:360] acquireMachinesLock for default-k8s-diff-port-225354: {Name:mk6f4cf32f69b6a51f12f83adcd3cd0eb0ae8cbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:54:10.952590  323767 start.go:364] duration metric: took 34.986µs to acquireMachinesLock for "default-k8s-diff-port-225354"
	I0110 08:54:10.952607  323767 start.go:96] Skipping create...Using existing machine configuration
	I0110 08:54:10.952614  323767 fix.go:54] fixHost starting: 
	I0110 08:54:10.952835  323767 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225354 --format={{.State.Status}}
	I0110 08:54:10.971677  323767 fix.go:112] recreateIfNeeded on default-k8s-diff-port-225354: state=Stopped err=<nil>
	W0110 08:54:10.971712  323767 fix.go:138] unexpected machine state, will restart: <nil>
	W0110 08:54:09.764911  319849 pod_ready.go:104] pod "coredns-7d764666f9-ss4nt" is not "Ready", error: <nil>
	W0110 08:54:12.264373  319849 pod_ready.go:104] pod "coredns-7d764666f9-ss4nt" is not "Ready", error: <nil>
	W0110 08:54:10.447913  313874 pod_ready.go:104] pod "coredns-7d764666f9-wpsnn" is not "Ready", error: <nil>
	I0110 08:54:12.447442  313874 pod_ready.go:94] pod "coredns-7d764666f9-wpsnn" is "Ready"
	I0110 08:54:12.447465  313874 pod_ready.go:86] duration metric: took 37.005475257s for pod "coredns-7d764666f9-wpsnn" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:12.450109  313874 pod_ready.go:83] waiting for pod "etcd-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:12.454228  313874 pod_ready.go:94] pod "etcd-no-preload-095312" is "Ready"
	I0110 08:54:12.454256  313874 pod_ready.go:86] duration metric: took 4.12175ms for pod "etcd-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:12.456424  313874 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:12.460419  313874 pod_ready.go:94] pod "kube-apiserver-no-preload-095312" is "Ready"
	I0110 08:54:12.460442  313874 pod_ready.go:86] duration metric: took 3.995934ms for pod "kube-apiserver-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:12.462584  313874 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:12.645718  313874 pod_ready.go:94] pod "kube-controller-manager-no-preload-095312" is "Ready"
	I0110 08:54:12.645758  313874 pod_ready.go:86] duration metric: took 183.153558ms for pod "kube-controller-manager-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:12.845858  313874 pod_ready.go:83] waiting for pod "kube-proxy-vrzf6" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:13.246243  313874 pod_ready.go:94] pod "kube-proxy-vrzf6" is "Ready"
	I0110 08:54:13.246269  313874 pod_ready.go:86] duration metric: took 400.386349ms for pod "kube-proxy-vrzf6" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:13.445337  313874 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:13.845542  313874 pod_ready.go:94] pod "kube-scheduler-no-preload-095312" is "Ready"
	I0110 08:54:13.845566  313874 pod_ready.go:86] duration metric: took 400.206561ms for pod "kube-scheduler-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:13.845577  313874 pod_ready.go:40] duration metric: took 38.40686605s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 08:54:13.890931  313874 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 08:54:13.892708  313874 out.go:179] * Done! kubectl is now configured to use "no-preload-095312" cluster and "default" namespace by default
	I0110 08:54:10.973787  323767 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-225354" ...
	I0110 08:54:10.973853  323767 cli_runner.go:164] Run: docker start default-k8s-diff-port-225354
	I0110 08:54:11.238333  323767 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225354 --format={{.State.Status}}
	I0110 08:54:11.258016  323767 kic.go:430] container "default-k8s-diff-port-225354" state is running.
	I0110 08:54:11.258559  323767 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225354
	I0110 08:54:11.280398  323767 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/config.json ...
	I0110 08:54:11.280702  323767 machine.go:94] provisionDockerMachine start ...
	I0110 08:54:11.280828  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:11.301429  323767 main.go:144] libmachine: Using SSH client type: native
	I0110 08:54:11.301668  323767 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0110 08:54:11.301681  323767 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 08:54:11.302419  323767 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42400->127.0.0.1:33123: read: connection reset by peer
	I0110 08:54:14.431592  323767 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-225354
	
	I0110 08:54:14.431635  323767 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-225354"
	I0110 08:54:14.431702  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:14.451318  323767 main.go:144] libmachine: Using SSH client type: native
	I0110 08:54:14.451515  323767 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0110 08:54:14.451527  323767 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-225354 && echo "default-k8s-diff-port-225354" | sudo tee /etc/hostname
	I0110 08:54:14.589004  323767 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-225354
	
	I0110 08:54:14.589083  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:14.607514  323767 main.go:144] libmachine: Using SSH client type: native
	I0110 08:54:14.607721  323767 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0110 08:54:14.607763  323767 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-225354' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-225354/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-225354' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 08:54:14.737006  323767 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 08:54:14.737035  323767 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-3641/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-3641/.minikube}
	I0110 08:54:14.737067  323767 ubuntu.go:190] setting up certificates
	I0110 08:54:14.737089  323767 provision.go:84] configureAuth start
	I0110 08:54:14.737149  323767 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225354
	I0110 08:54:14.756076  323767 provision.go:143] copyHostCerts
	I0110 08:54:14.756148  323767 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem, removing ...
	I0110 08:54:14.756164  323767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem
	I0110 08:54:14.756236  323767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem (1078 bytes)
	I0110 08:54:14.756404  323767 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem, removing ...
	I0110 08:54:14.756417  323767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem
	I0110 08:54:14.756450  323767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem (1123 bytes)
	I0110 08:54:14.756528  323767 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem, removing ...
	I0110 08:54:14.756537  323767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem
	I0110 08:54:14.756563  323767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem (1675 bytes)
	I0110 08:54:14.756647  323767 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-225354 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-225354 localhost minikube]
	I0110 08:54:14.793509  323767 provision.go:177] copyRemoteCerts
	I0110 08:54:14.793560  323767 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 08:54:14.793595  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:14.813116  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:14.905947  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0110 08:54:14.924947  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 08:54:14.942427  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0110 08:54:14.959358  323767 provision.go:87] duration metric: took 222.24641ms to configureAuth
	I0110 08:54:14.959385  323767 ubuntu.go:206] setting minikube options for container-runtime
	I0110 08:54:14.959541  323767 config.go:182] Loaded profile config "default-k8s-diff-port-225354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:14.959639  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:14.978423  323767 main.go:144] libmachine: Using SSH client type: native
	I0110 08:54:14.978687  323767 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0110 08:54:14.978709  323767 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 08:54:15.292502  323767 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 08:54:15.292533  323767 machine.go:97] duration metric: took 4.011809959s to provisionDockerMachine
	I0110 08:54:15.292549  323767 start.go:293] postStartSetup for "default-k8s-diff-port-225354" (driver="docker")
	I0110 08:54:15.292564  323767 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 08:54:15.292642  323767 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 08:54:15.292693  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:15.314158  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:15.408580  323767 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 08:54:15.412461  323767 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 08:54:15.412484  323767 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 08:54:15.412494  323767 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-3641/.minikube/addons for local assets ...
	I0110 08:54:15.412543  323767 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-3641/.minikube/files for local assets ...
	I0110 08:54:15.412618  323767 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem -> 71832.pem in /etc/ssl/certs
	I0110 08:54:15.412701  323767 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 08:54:15.420257  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem --> /etc/ssl/certs/71832.pem (1708 bytes)
	I0110 08:54:15.437907  323767 start.go:296] duration metric: took 145.342731ms for postStartSetup
	I0110 08:54:15.437987  323767 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:54:15.438056  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:15.456452  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:15.547075  323767 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 08:54:15.551926  323767 fix.go:56] duration metric: took 4.599307206s for fixHost
	I0110 08:54:15.551952  323767 start.go:83] releasing machines lock for "default-k8s-diff-port-225354", held for 4.599352578s
	I0110 08:54:15.552009  323767 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225354
	I0110 08:54:15.571390  323767 ssh_runner.go:195] Run: cat /version.json
	I0110 08:54:15.571479  323767 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 08:54:15.571492  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:15.571536  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:15.590047  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:15.591127  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:15.681002  323767 ssh_runner.go:195] Run: systemctl --version
	I0110 08:54:15.736158  323767 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 08:54:15.771411  323767 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 08:54:15.776401  323767 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 08:54:15.776474  323767 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 08:54:15.784643  323767 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 08:54:15.784665  323767 start.go:496] detecting cgroup driver to use...
	I0110 08:54:15.784700  323767 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 08:54:15.784774  323767 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 08:54:15.799081  323767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 08:54:15.812276  323767 docker.go:218] disabling cri-docker service (if available) ...
	I0110 08:54:15.812336  323767 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 08:54:15.826890  323767 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 08:54:15.839388  323767 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 08:54:15.922811  323767 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 08:54:15.998942  323767 docker.go:234] disabling docker service ...
	I0110 08:54:15.999015  323767 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 08:54:16.014407  323767 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 08:54:16.026725  323767 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 08:54:16.107584  323767 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 08:54:16.187958  323767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 08:54:16.200970  323767 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 08:54:16.215874  323767 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 08:54:16.215939  323767 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:54:16.225363  323767 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 08:54:16.225421  323767 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:54:16.234046  323767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:54:16.242715  323767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:54:16.251754  323767 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 08:54:16.260507  323767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:54:16.270006  323767 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:54:16.278297  323767 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:54:16.287021  323767 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 08:54:16.295062  323767 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 08:54:16.302531  323767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:54:16.386036  323767 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 08:54:16.519040  323767 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 08:54:16.519096  323767 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 08:54:16.523210  323767 start.go:574] Will wait 60s for crictl version
	I0110 08:54:16.523262  323767 ssh_runner.go:195] Run: which crictl
	I0110 08:54:16.526960  323767 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 08:54:16.555412  323767 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 08:54:16.555483  323767 ssh_runner.go:195] Run: crio --version
	I0110 08:54:16.583901  323767 ssh_runner.go:195] Run: crio --version
	I0110 08:54:16.612570  323767 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 08:54:16.613832  323767 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-225354 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:54:16.631782  323767 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 08:54:16.636032  323767 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 08:54:16.646878  323767 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-225354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-225354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 08:54:16.646997  323767 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:54:16.647043  323767 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 08:54:16.681410  323767 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 08:54:16.681432  323767 crio.go:433] Images already preloaded, skipping extraction
	I0110 08:54:16.681488  323767 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 08:54:16.709542  323767 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 08:54:16.709564  323767 cache_images.go:86] Images are preloaded, skipping loading
	I0110 08:54:16.709578  323767 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 crio true true} ...
	I0110 08:54:16.709686  323767 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-225354 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-225354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
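
The [Unit]/[Service] fragment above is the systemd drop-in minikube renders for the kubelet (written a few lines later as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A trimmed, hypothetical sketch of that rendering step; minikube's real template carries more flags (bootstrap kubeconfig, cgroup settings, and so on):

package main

import (
	"os"
	"text/template"
)

// Illustrative template mirroring the drop-in above, not minikube's
// actual template. The empty ExecStart= line clears any ExecStart
// inherited from the base unit before setting the new one.
const kubeletDropIn = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.K8sVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	if err := t.Execute(os.Stdout, map[string]string{
		"Runtime":    "crio",
		"K8sVersion": "v1.35.0",
		"NodeName":   "default-k8s-diff-port-225354",
		"NodeIP":     "192.168.85.2",
	}); err != nil {
		panic(err)
	}
}
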
	I0110 08:54:16.709773  323767 ssh_runner.go:195] Run: crio config
	I0110 08:54:16.757583  323767 cni.go:84] Creating CNI manager for ""
	I0110 08:54:16.757609  323767 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:54:16.757627  323767 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 08:54:16.757647  323767 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-225354 NodeName:default-k8s-diff-port-225354 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 08:54:16.757801  323767 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-225354"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
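
	The generated file above is a multi-document YAML: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, separated by "---". As a sketch, one of those documents can be parsed with gopkg.in/yaml.v3 to pull out a field such as the cgroup driver; the inline struct here is an illustrative subset, not a real kubeadm type:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// One document from the generated multi-document config above.
const kubeletDoc = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
failSwapOn: false
`

func main() {
	var cfg struct {
		Kind         string `yaml:"kind"`
		CgroupDriver string `yaml:"cgroupDriver"`
	}
	if err := yaml.Unmarshal([]byte(kubeletDoc), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("%s: cgroupDriver=%s\n", cfg.Kind, cfg.CgroupDriver)
}
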
	
	I0110 08:54:16.757897  323767 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 08:54:16.767516  323767 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 08:54:16.767578  323767 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 08:54:16.775454  323767 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0110 08:54:16.788355  323767 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 08:54:16.801342  323767 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0110 08:54:16.814642  323767 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 08:54:16.819369  323767 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
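
Both /etc/hosts edits (host.minikube.internal earlier, control-plane.minikube.internal here) use the same shell pattern: filter out any stale line ending in a tab plus the hostname, append the fresh mapping, write to a temp file, then sudo cp it into place, since a plain shell redirect cannot write into a root-owned file. A hypothetical in-process equivalent of that upsert:

package main

import (
	"fmt"
	"strings"
)

// Drop any existing line for the hostname (tab-separated, matching what
// the grep -v in the log filters), then append "ip<TAB>hostname".
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	before := "127.0.0.1\tlocalhost\n192.168.85.9\tcontrol-plane.minikube.internal\n"
	fmt.Print(upsertHost(before, "192.168.85.2", "control-plane.minikube.internal"))
}
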
	I0110 08:54:16.829406  323767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:54:16.909443  323767 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:54:16.933270  323767 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354 for IP: 192.168.85.2
	I0110 08:54:16.933296  323767 certs.go:195] generating shared ca certs ...
	I0110 08:54:16.933320  323767 certs.go:227] acquiring lock for ca certs: {Name:mk00e261408d0e9fd9be39128613c5110a764de2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:54:16.933503  323767 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.key
	I0110 08:54:16.933570  323767 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.key
	I0110 08:54:16.933585  323767 certs.go:257] generating profile certs ...
	I0110 08:54:16.933711  323767 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/client.key
	I0110 08:54:16.933843  323767 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/apiserver.key.b2f93262
	I0110 08:54:16.933914  323767 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/proxy-client.key
	I0110 08:54:16.934071  323767 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183.pem (1338 bytes)
	W0110 08:54:16.934116  323767 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183_empty.pem, impossibly tiny 0 bytes
	I0110 08:54:16.934130  323767 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem (1675 bytes)
	I0110 08:54:16.934171  323767 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem (1078 bytes)
	I0110 08:54:16.934216  323767 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem (1123 bytes)
	I0110 08:54:16.934253  323767 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem (1675 bytes)
	I0110 08:54:16.934322  323767 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem (1708 bytes)
	I0110 08:54:16.935216  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 08:54:16.954242  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 08:54:16.973102  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 08:54:16.991857  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 08:54:17.016862  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0110 08:54:17.038329  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 08:54:17.058014  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 08:54:17.078592  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 08:54:17.097918  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183.pem --> /usr/share/ca-certificates/7183.pem (1338 bytes)
	I0110 08:54:17.120708  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem --> /usr/share/ca-certificates/71832.pem (1708 bytes)
	I0110 08:54:17.139003  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 08:54:17.156524  323767 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 08:54:17.168668  323767 ssh_runner.go:195] Run: openssl version
	I0110 08:54:17.175368  323767 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7183.pem
	I0110 08:54:17.182747  323767 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7183.pem /etc/ssl/certs/7183.pem
	I0110 08:54:17.190691  323767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7183.pem
	I0110 08:54:17.194457  323767 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 08:23 /usr/share/ca-certificates/7183.pem
	I0110 08:54:17.194502  323767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7183.pem
	I0110 08:54:17.228543  323767 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 08:54:17.236153  323767 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/71832.pem
	I0110 08:54:17.243355  323767 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/71832.pem /etc/ssl/certs/71832.pem
	I0110 08:54:17.250614  323767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71832.pem
	I0110 08:54:17.254314  323767 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 08:23 /usr/share/ca-certificates/71832.pem
	I0110 08:54:17.254360  323767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71832.pem
	I0110 08:54:17.291080  323767 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 08:54:17.299045  323767 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:54:17.306390  323767 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 08:54:17.314035  323767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:54:17.317953  323767 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:54:17.318000  323767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:54:17.355000  323767 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
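
The ln/openssl pairs above populate OpenSSL's hashed certificate directory: each CA under /etc/ssl/certs must be reachable as <subject-hash>.0, and "openssl x509 -hash -noout" prints that hash (51391683, 3ec20f2e, and b5213941 in the runs above). A small sketch that derives the link name for a given PEM file; the input path is only an example:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash shells out to openssl exactly as the log does and returns
// the subject-name hash used for the /etc/ssl/certs/<hash>.0 symlink.
func subjectHash(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	fmt.Printf("ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
}
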
	I0110 08:54:17.362980  323767 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 08:54:17.367171  323767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 08:54:17.402450  323767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 08:54:17.439845  323767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 08:54:17.488989  323767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 08:54:17.547905  323767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 08:54:17.597239  323767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
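
Each "-checkend 86400" run above asks openssl whether the certificate expires within the next 86400 seconds (24 hours); a nonzero exit marks the certificate as expiring soon, which would force regeneration before restart. The same check can be sketched natively with crypto/x509; the path is illustrative:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within duration d, mirroring "openssl x509 -checkend".
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
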
	I0110 08:54:17.641402  323767 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-225354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-225354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:54:17.641512  323767 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:54:17.641568  323767 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:54:17.671428  323767 cri.go:96] found id: "85fbcb73a888a911d321e3d1ed0152e1aa93447d76ca22015d3a09638892f2af"
	I0110 08:54:17.671452  323767 cri.go:96] found id: "6de83a52f42b4d00ef4463aa0a10635035e611d92fcb5f692497cd23e40d7676"
	I0110 08:54:17.671467  323767 cri.go:96] found id: "767f06c98be9d86d55d0cbaaa375406db22fd312258e490654cdcba950d47c27"
	I0110 08:54:17.671472  323767 cri.go:96] found id: "5055dfe1945b7e474350afd64ade8604c08027a381ce57320b00e445ef977a5c"
	I0110 08:54:17.671475  323767 cri.go:96] found id: ""
	I0110 08:54:17.671511  323767 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 08:54:17.683716  323767 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:54:17Z" level=error msg="open /run/runc: no such file or directory"
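
The "runc list" failure above is tolerated: /run/runc not existing simply means runc has no state directory yet, so there is nothing paused to resume, and the flow logs a warning and continues. A sketch of such a tolerant call, under the assumption that a nonzero exit can be treated as an empty list:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// listRuncContainers wraps "runc list -f json". A nonzero exit (e.g.
// missing /run/runc) is mapped to an empty result rather than a fatal
// error; that mapping is an assumption for this sketch.
func listRuncContainers() ([]byte, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return nil, nil // no runc state directory: nothing paused
	}
	return out, err
}

func main() {
	out, err := listRuncContainers()
	fmt.Printf("containers=%q err=%v\n", out, err)
}
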
	I0110 08:54:17.683818  323767 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 08:54:17.692500  323767 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 08:54:17.692519  323767 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 08:54:17.692563  323767 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 08:54:17.700875  323767 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 08:54:17.702210  323767 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-225354" does not appear in /home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:54:17.703105  323767 kubeconfig.go:62] /home/jenkins/minikube-integration/22427-3641/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-225354" cluster setting kubeconfig missing "default-k8s-diff-port-225354" context setting]
	I0110 08:54:17.704607  323767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/kubeconfig: {Name:mk0d29b3b0ee1fd71729aff31f901be50a2d0664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:54:17.706397  323767 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 08:54:17.714287  323767 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I0110 08:54:17.714312  323767 kubeadm.go:602] duration metric: took 21.788411ms to restartPrimaryControlPlane
	I0110 08:54:17.714319  323767 kubeadm.go:403] duration metric: took 72.928609ms to StartCluster
	I0110 08:54:17.714335  323767 settings.go:142] acquiring lock: {Name:mkbb32fc6441ceb31ce2923ea8999f8375298f08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:54:17.714398  323767 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:54:17.715957  323767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/kubeconfig: {Name:mk0d29b3b0ee1fd71729aff31f901be50a2d0664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:54:17.716233  323767 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 08:54:17.716303  323767 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 08:54:17.716385  323767 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-225354"
	I0110 08:54:17.716404  323767 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-225354"
	W0110 08:54:17.716410  323767 addons.go:248] addon storage-provisioner should already be in state true
	I0110 08:54:17.716433  323767 host.go:66] Checking if "default-k8s-diff-port-225354" exists ...
	I0110 08:54:17.716458  323767 config.go:182] Loaded profile config "default-k8s-diff-port-225354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:17.716558  323767 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-225354"
	I0110 08:54:17.716606  323767 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-225354"
	I0110 08:54:17.716526  323767 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-225354"
	I0110 08:54:17.716694  323767 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-225354"
	W0110 08:54:17.716710  323767 addons.go:248] addon dashboard should already be in state true
	I0110 08:54:17.716747  323767 host.go:66] Checking if "default-k8s-diff-port-225354" exists ...
	I0110 08:54:17.716965  323767 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225354 --format={{.State.Status}}
	I0110 08:54:17.716965  323767 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225354 --format={{.State.Status}}
	I0110 08:54:17.717413  323767 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225354 --format={{.State.Status}}
	I0110 08:54:17.721904  323767 out.go:179] * Verifying Kubernetes components...
	I0110 08:54:17.723462  323767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:54:17.745550  323767 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 08:54:17.745608  323767 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 08:54:17.746683  323767 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 08:54:17.746701  323767 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 08:54:17.746704  323767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	W0110 08:54:14.265096  319849 pod_ready.go:104] pod "coredns-7d764666f9-ss4nt" is not "Ready", error: <nil>
	W0110 08:54:16.764367  319849 pod_ready.go:104] pod "coredns-7d764666f9-ss4nt" is not "Ready", error: <nil>
	I0110 08:54:17.746812  323767 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-225354"
	W0110 08:54:17.746828  323767 addons.go:248] addon default-storageclass should already be in state true
	I0110 08:54:17.746787  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:17.746853  323767 host.go:66] Checking if "default-k8s-diff-port-225354" exists ...
	I0110 08:54:17.747311  323767 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225354 --format={{.State.Status}}
	I0110 08:54:17.747889  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 08:54:17.747930  323767 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 08:54:17.747987  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:17.783552  323767 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 08:54:17.783576  323767 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 08:54:17.783630  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:17.783875  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:17.785980  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:17.810678  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:17.872366  323767 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:54:17.887886  323767 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-225354" to be "Ready" ...
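
The node_ready wait polls the node's Ready condition until it flips to True or the 6m0s budget runs out. minikube queries the API server directly; the kubectl-based loop below is an illustrative stand-in for the same check:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// nodeReady reads the node's Ready condition via kubectl's JSONPath output.
func nodeReady(name string) bool {
	out, err := exec.Command("kubectl", "get", "node", name, "-o",
		`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	return err == nil && strings.TrimSpace(string(out)) == "True"
}

func main() {
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if nodeReady("default-k8s-diff-port-225354") {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for node readiness")
}
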
	I0110 08:54:17.898099  323767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 08:54:17.903229  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 08:54:17.903253  323767 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 08:54:17.917337  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 08:54:17.917360  323767 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 08:54:17.921609  323767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 08:54:17.933302  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 08:54:17.933326  323767 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 08:54:17.947626  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 08:54:17.947646  323767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 08:54:17.960408  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 08:54:17.960475  323767 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 08:54:17.974266  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 08:54:17.974295  323767 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 08:54:17.986776  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 08:54:17.986799  323767 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 08:54:17.999337  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 08:54:17.999358  323767 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 08:54:18.011953  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 08:54:18.011978  323767 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 08:54:18.024936  323767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 08:54:19.541880  323767 node_ready.go:49] node "default-k8s-diff-port-225354" is "Ready"
	I0110 08:54:19.541922  323767 node_ready.go:38] duration metric: took 1.653997821s for node "default-k8s-diff-port-225354" to be "Ready" ...
	I0110 08:54:19.541939  323767 api_server.go:52] waiting for apiserver process to appear ...
	I0110 08:54:19.541994  323767 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 08:54:20.082614  323767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.184477569s)
	I0110 08:54:20.082684  323767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.161045842s)
	I0110 08:54:20.082816  323767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.057848718s)
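
The three "Completed: ... (2.1s)" lines above report the durations of addon applies that were launched concurrently and then joined, which is why their timings overlap. A minimal fan-out/fan-in sketch of that pattern; fakeApply stands in for the real ssh + kubectl apply call:

package main

import (
	"fmt"
	"sync"
	"time"
)

// fakeApply simulates one remote "kubectl apply -f <manifest>" run and
// returns how long it took.
func fakeApply(d time.Duration) time.Duration {
	start := time.Now()
	time.Sleep(d) // stand-in for the remote apply
	return time.Since(start)
}

func main() {
	addons := map[string]time.Duration{
		"storage-provisioner": 120 * time.Millisecond,
		"storageclass":        100 * time.Millisecond,
		"dashboard":           150 * time.Millisecond,
	}
	var wg sync.WaitGroup
	for name, d := range addons {
		wg.Add(1)
		go func(name string, d time.Duration) {
			defer wg.Done()
			fmt.Printf("Completed: %s (%s)\n", name, fakeApply(d))
		}(name, d)
	}
	wg.Wait()
}
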
	I0110 08:54:20.082870  323767 api_server.go:72] duration metric: took 2.366605517s to wait for apiserver process to appear ...
	I0110 08:54:20.082941  323767 api_server.go:88] waiting for apiserver healthz status ...
	I0110 08:54:20.082962  323767 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0110 08:54:20.084235  323767 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-225354 addons enable metrics-server
	
	I0110 08:54:20.087836  323767 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 08:54:20.087861  323767 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 08:54:20.091293  323767 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 08:54:20.092886  323767 addons.go:530] duration metric: took 2.376597654s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0110 08:54:20.583799  323767 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0110 08:54:20.588631  323767 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 08:54:20.588668  323767 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
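
A 500 from /healthz is expected this early in a restart: the [-] lines show the RBAC bootstrap roles (and, in the first probe, the system priority classes) had not finished installing, and the probe simply repeats until the apiserver returns 200. A sketch of such a polling loop, with the endpoint taken from the log and the cadence and retry cap assumed:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// Poll /healthz until the apiserver stops returning 500 (its
// poststarthooks have finished) or the retry budget is exhausted.
func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// the cluster serves a self-signed CA, so skip verification here
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for i := 0; i < 120; i++ {
		resp, err := client.Get("https://192.168.85.2:8444/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode, "- retrying")
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("gave up waiting for healthz")
}
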
	
	
	==> CRI-O <==
	Jan 10 08:53:51 old-k8s-version-093083 crio[572]: time="2026-01-10T08:53:51.372233058Z" level=info msg="Created container e605b843e423ec01dd112548d07dcbdec1954f9df6ba09936c682d71de576f93: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dtt5w/kubernetes-dashboard" id=07903371-8da8-495a-afd7-b8ab70043b06 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:53:51 old-k8s-version-093083 crio[572]: time="2026-01-10T08:53:51.372765099Z" level=info msg="Starting container: e605b843e423ec01dd112548d07dcbdec1954f9df6ba09936c682d71de576f93" id=51801ec2-cefd-495a-9fe2-ded1f7ca23f7 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:53:51 old-k8s-version-093083 crio[572]: time="2026-01-10T08:53:51.374609171Z" level=info msg="Started container" PID=1764 containerID=e605b843e423ec01dd112548d07dcbdec1954f9df6ba09936c682d71de576f93 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dtt5w/kubernetes-dashboard id=51801ec2-cefd-495a-9fe2-ded1f7ca23f7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c11c917910354b25b044942572de015401cf3513251c5b987890e899751ca5d4
	Jan 10 08:54:04 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:04.3942689Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a7917da4-d483-41bc-b23e-c41b1ccfc4a6 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:04 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:04.395253651Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c01dee5a-c62c-400e-afd9-2039ea7da28f name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:04 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:04.396439658Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e013fcf0-f850-4bdf-9693-8e9639a18bc4 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:04 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:04.396579566Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:04 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:04.401280443Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:04 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:04.401469683Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/92e0e38d53ff4c048f7a1f153db9ca954d5ac940056dd1e118d05feca6ee27f1/merged/etc/passwd: no such file or directory"
	Jan 10 08:54:04 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:04.401506885Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/92e0e38d53ff4c048f7a1f153db9ca954d5ac940056dd1e118d05feca6ee27f1/merged/etc/group: no such file or directory"
	Jan 10 08:54:04 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:04.401827098Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:04 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:04.441579603Z" level=info msg="Created container cf2021f237b7a23412332d927f1e3fc61448f19ce766d0d96d5a317c9855bb65: kube-system/storage-provisioner/storage-provisioner" id=e013fcf0-f850-4bdf-9693-8e9639a18bc4 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:04 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:04.442203505Z" level=info msg="Starting container: cf2021f237b7a23412332d927f1e3fc61448f19ce766d0d96d5a317c9855bb65" id=f58c7891-3745-4c82-a4e7-7829a174e14c name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:54:04 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:04.444451377Z" level=info msg="Started container" PID=1787 containerID=cf2021f237b7a23412332d927f1e3fc61448f19ce766d0d96d5a317c9855bb65 description=kube-system/storage-provisioner/storage-provisioner id=f58c7891-3745-4c82-a4e7-7829a174e14c name=/runtime.v1.RuntimeService/StartContainer sandboxID=7804ad23ae100420b531129f18a5a98e25b1d068c481ea99a9d4d0a1688346b6
	Jan 10 08:54:09 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:09.280966335Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=84e35603-001d-4999-baf2-8d12e7917f78 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:09 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:09.282021851Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9b3b19d8-3690-4771-814c-c59d1f2e9fa5 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:09 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:09.282916237Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h2jqh/dashboard-metrics-scraper" id=092e25e2-d699-4136-bae6-54833863b7de name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:09 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:09.283052294Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:09 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:09.288824393Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:09 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:09.289356686Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:09 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:09.31819171Z" level=info msg="Created container 1aa38e065133620305b9ea5ba3cc57e2dd22a6a1fcb024cc2b81ed4b8495d94a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h2jqh/dashboard-metrics-scraper" id=092e25e2-d699-4136-bae6-54833863b7de name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:09 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:09.318806394Z" level=info msg="Starting container: 1aa38e065133620305b9ea5ba3cc57e2dd22a6a1fcb024cc2b81ed4b8495d94a" id=78002839-7915-4ad3-bfcd-9e0a3016379b name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:54:09 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:09.321022927Z" level=info msg="Started container" PID=1825 containerID=1aa38e065133620305b9ea5ba3cc57e2dd22a6a1fcb024cc2b81ed4b8495d94a description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h2jqh/dashboard-metrics-scraper id=78002839-7915-4ad3-bfcd-9e0a3016379b name=/runtime.v1.RuntimeService/StartContainer sandboxID=8b541984d6d40c1cd5c80f754ca07e560ec8853186a1581abb8783eded58abb2
	Jan 10 08:54:09 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:09.411011114Z" level=info msg="Removing container: dabc28c95171c3213864a92f325ebb091da4392b8a2afb9551c9f3cbfe48d2d1" id=0e77320e-867a-4f16-acc8-2bf02579fe54 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 08:54:09 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:09.419632285Z" level=info msg="Removed container dabc28c95171c3213864a92f325ebb091da4392b8a2afb9551c9f3cbfe48d2d1: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h2jqh/dashboard-metrics-scraper" id=0e77320e-867a-4f16-acc8-2bf02579fe54 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	1aa38e0651336       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago      Exited              dashboard-metrics-scraper   2                   8b541984d6d40       dashboard-metrics-scraper-5f989dc9cf-h2jqh       kubernetes-dashboard
	cf2021f237b7a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   7804ad23ae100       storage-provisioner                              kube-system
	e605b843e423e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   30 seconds ago      Running             kubernetes-dashboard        0                   c11c917910354       kubernetes-dashboard-8694d4445c-dtt5w            kubernetes-dashboard
	baf39ba3661d5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           47 seconds ago      Running             coredns                     0                   466b958d31913       coredns-5dd5756b68-sscts                         kube-system
	d7b14d87548cc       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           47 seconds ago      Running             busybox                     1                   28bf08291f841       busybox                                          default
	be67c5d068c39       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           47 seconds ago      Running             kindnet-cni                 0                   a5effcbbd7e5a       kindnet-nn64b                                    kube-system
	b731c34ce0dc7       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           47 seconds ago      Running             kube-proxy                  0                   c15fea7f8fed5       kube-proxy-r7qzb                                 kube-system
	d4e39023b5120       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           47 seconds ago      Exited              storage-provisioner         0                   7804ad23ae100       storage-provisioner                              kube-system
	dd24142d01693       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           51 seconds ago      Running             kube-scheduler              0                   05d2877df5917       kube-scheduler-old-k8s-version-093083            kube-system
	77835742c9e4e       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           51 seconds ago      Running             kube-controller-manager     0                   69753a0f1a861       kube-controller-manager-old-k8s-version-093083   kube-system
	45c05fec75b01       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           51 seconds ago      Running             etcd                        0                   f51c02cd2bc04       etcd-old-k8s-version-093083                      kube-system
	a34eedbb84c37       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           51 seconds ago      Running             kube-apiserver              0                   41e628aecc10e       kube-apiserver-old-k8s-version-093083            kube-system
	
	
	==> coredns [baf39ba3661d5e4554402c182c54915a779e9c2589d4896b8c74b818186d2e2e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49774 - 4918 "HINFO IN 4927896878841463775.316847351234262986. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.015460235s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-093083
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-093083
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=old-k8s-version-093083
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T08_52_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 08:52:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-093083
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 08:54:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 08:54:03 +0000   Sat, 10 Jan 2026 08:52:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 08:54:03 +0000   Sat, 10 Jan 2026 08:52:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 08:54:03 +0000   Sat, 10 Jan 2026 08:52:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 08:54:03 +0000   Sat, 10 Jan 2026 08:52:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-093083
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                c7a82a71-54f6-4520-9c6e-142f796b8561
	  Boot ID:                    7c12f492-59b6-440b-986c-741ece916a23
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-5dd5756b68-sscts                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-old-k8s-version-093083                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-nn64b                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-old-k8s-version-093083             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-old-k8s-version-093083    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-r7qzb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-old-k8s-version-093083             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-h2jqh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-dtt5w             0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 103s               kube-proxy       
	  Normal  Starting                 47s                kube-proxy       
	  Normal  NodeHasSufficientMemory  116s               kubelet          Node old-k8s-version-093083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s               kubelet          Node old-k8s-version-093083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s               kubelet          Node old-k8s-version-093083 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s               node-controller  Node old-k8s-version-093083 event: Registered Node old-k8s-version-093083 in Controller
	  Normal  NodeReady                90s                kubelet          Node old-k8s-version-093083 status is now: NodeReady
	  Normal  Starting                 52s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)  kubelet          Node old-k8s-version-093083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)  kubelet          Node old-k8s-version-093083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)  kubelet          Node old-k8s-version-093083 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36s                node-controller  Node old-k8s-version-093083 event: Registered Node old-k8s-version-093083 in Controller
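	
	The node view above is the API server's description of old-k8s-version-093083, including the resource-request table and the doubled set of kubelet start events from the restart. It can be reproduced with a plain describe call (a minimal sketch, reusing the kubectl context name that appears in the helper invocations later in this report):
	
	  # hypothetical reproduction of the snapshot above; the context name
	  # matches the profile name, as in the other kubectl calls in this report
	  kubectl --context old-k8s-version-093083 describe node old-k8s-version-093083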
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 4a 03 46 88 66 4e 08 06
	[  +6.406566] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.001429] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[ +24.002020] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7a 6f 49 7e 9d d1 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 3a ac 18 98 c9 08 06
	[Jan10 08:52] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4a 05 5c 86 cf df 08 06
	[  +0.000337] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.000602] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[  +0.739059] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	[  +1.988700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[ +20.040962] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 ac f8 cf fb 84 08 06
	[  +0.000375] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[  +0.000915] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
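	
	The repeated "martian source" entries are the kernel flagging packets whose source address (the 10.244.0.0/24 pod range) is unexpected on eth0; they are only logged while martian logging is enabled. The relevant sysctls can be read from inside the node (a sketch, assuming the usual kicbase toolchain):
	
	  # inspect reverse-path filtering and martian logging on the node
	  out/minikube-linux-amd64 -p old-k8s-version-093083 ssh -- sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians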
	
	
	==> etcd [45c05fec75b0148f5c10bc223ec4f3c0de54145a816e2131a615a16966edecc9] <==
	{"level":"info","ts":"2026-01-10T08:53:29.865903Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T08:53:29.86594Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T08:53:29.865773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2026-01-10T08:53:29.866256Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2026-01-10T08:53:29.866574Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T08:53:29.866759Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T08:53:29.868358Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2026-01-10T08:53:29.868626Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T08:53:29.868692Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T08:53:29.868496Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2026-01-10T08:53:29.869239Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2026-01-10T08:53:31.353688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T08:53:31.353763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T08:53:31.353788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2026-01-10T08:53:31.353807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:31.353816Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:31.353828Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:31.353838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:31.354813Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-093083 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T08:53:31.354845Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:53:31.354846Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:53:31.355053Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T08:53:31.355083Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T08:53:31.356169Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2026-01-10T08:53:31.356427Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
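	
	The etcd log shows a single-member cluster restarting and re-electing itself leader at term 3 before serving clients on 2379. Member health can be confirmed with etcdctl using the cert paths the server prints above (a sketch; assumes etcdctl is available on the node, e.g. inside the etcd container):
	
	  # run from a node shell: out/minikube-linux-amd64 -p old-k8s-version-093083 ssh
	  sudo ETCDCTL_API=3 etcdctl \
	    --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint status --write-out=table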
	
	
	==> kernel <==
	 08:54:21 up 36 min,  0 user,  load average: 4.19, 3.97, 2.61
	Linux old-k8s-version-093083 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [be67c5d068c39c6b842d3d6eabf77430720ba8af75b51a3ea51b3a1d05abe021] <==
	I0110 08:53:33.912869       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 08:53:33.913323       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0110 08:53:33.913518       1 main.go:148] setting mtu 1500 for CNI 
	I0110 08:53:33.913542       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 08:53:33.913564       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T08:53:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 08:53:34.116325       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 08:53:34.116357       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 08:53:34.116370       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 08:53:34.208269       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 08:53:34.507530       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 08:53:34.507561       1 metrics.go:72] Registering metrics
	I0110 08:53:34.507870       1 controller.go:711] "Syncing nftables rules"
	I0110 08:53:44.116110       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0110 08:53:44.116192       1 main.go:301] handling current node
	I0110 08:53:54.115876       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0110 08:53:54.115930       1 main.go:301] handling current node
	I0110 08:54:04.115903       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0110 08:54:04.116064       1 main.go:301] handling current node
	I0110 08:54:14.115428       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0110 08:54:14.115461       1 main.go:301] handling current node
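	
	kindnet here is handling routes only for the current node and syncing nftables rules; the NRI warning is benign when /var/run/nri/nri.sock does not exist. Its recent output can be pulled by label (a sketch, assuming the default app=kindnet label on minikube's kindnet daemonset):
	
	  # tail the kindnet daemonset logs in kube-system
	  kubectl --context old-k8s-version-093083 -n kube-system logs -l app=kindnet --tail=20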
	
	
	==> kube-apiserver [a34eedbb84c37c48a1a753a25087a48c8b7295f35503fe8d9738f819582226fa] <==
	I0110 08:53:32.570582       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0110 08:53:32.571792       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0110 08:53:32.571865       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0110 08:53:32.571934       1 aggregator.go:166] initial CRD sync complete...
	I0110 08:53:32.571954       1 autoregister_controller.go:141] Starting autoregister controller
	I0110 08:53:32.571958       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0110 08:53:32.571962       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 08:53:32.571974       1 cache.go:39] Caches are synced for autoregister controller
	I0110 08:53:32.572033       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 08:53:32.572033       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0110 08:53:32.572197       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0110 08:53:32.574228       1 shared_informer.go:318] Caches are synced for configmaps
	E0110 08:53:32.577876       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 08:53:33.476196       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0110 08:53:33.500519       1 controller.go:624] quota admission added evaluator for: namespaces
	I0110 08:53:33.531703       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0110 08:53:33.549066       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 08:53:33.556708       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 08:53:33.566794       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0110 08:53:33.638231       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.187.253"}
	I0110 08:53:33.657164       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.221.136"}
	I0110 08:53:44.985637       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 08:53:45.006839       1 controller.go:624] quota admission added evaluator for: endpoints
	I0110 08:53:45.031110       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0110 08:53:45.031111       1 controller.go:624] quota admission added evaluator for: replicasets.apps
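	
	The apiserver has finished its cache sync and is registering quota evaluators as objects get created; the earlier endpoint-removal error is the usual first-boot race. Overall readiness can be probed directly (a sketch; the /readyz?verbose endpoint is standard on this version):
	
	  # per-check readiness report from the same apiserver
	  kubectl --context old-k8s-version-093083 get --raw='/readyz?verbose'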
	
	
	==> kube-controller-manager [77835742c9e4e9169a8997b0af913ac31071a741ce055172f48d6e40e8bb0dfa] <==
	I0110 08:53:45.053242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="17.097663ms"
	I0110 08:53:45.053327       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.21119ms"
	I0110 08:53:45.062614       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.231422ms"
	I0110 08:53:45.062693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="43.574µs"
	I0110 08:53:45.064663       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="11.315156ms"
	I0110 08:53:45.064824       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="42.67µs"
	I0110 08:53:45.073799       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="81.953µs"
	I0110 08:53:45.083755       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="94.314µs"
	I0110 08:53:45.109017       1 shared_informer.go:318] Caches are synced for PV protection
	I0110 08:53:45.110145       1 shared_informer.go:318] Caches are synced for persistent volume
	I0110 08:53:45.112397       1 shared_informer.go:318] Caches are synced for attach detach
	I0110 08:53:45.157239       1 shared_informer.go:318] Caches are synced for resource quota
	I0110 08:53:45.237365       1 shared_informer.go:318] Caches are synced for resource quota
	I0110 08:53:45.555380       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 08:53:45.588021       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 08:53:45.588071       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0110 08:53:48.364501       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.444µs"
	I0110 08:53:49.371586       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="101.712µs"
	I0110 08:53:50.372897       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.469µs"
	I0110 08:53:52.385548       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.93216ms"
	I0110 08:53:52.385654       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="67.557µs"
	I0110 08:54:05.397996       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.984109ms"
	I0110 08:54:05.398132       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.359µs"
	I0110 08:54:09.421188       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="116.742µs"
	I0110 08:54:15.371810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="99.206µs"
	
	
	==> kube-proxy [b731c34ce0dc7be8cef547822c5515bcc237062ed93d07c59c1bf099151ddcd5] <==
	I0110 08:53:33.743911       1 server_others.go:69] "Using iptables proxy"
	I0110 08:53:33.763183       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I0110 08:53:33.791309       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 08:53:33.793748       1 server_others.go:152] "Using iptables Proxier"
	I0110 08:53:33.793787       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0110 08:53:33.793798       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0110 08:53:33.793840       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0110 08:53:33.794140       1 server.go:846] "Version info" version="v1.28.0"
	I0110 08:53:33.794157       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:53:33.794852       1 config.go:188] "Starting service config controller"
	I0110 08:53:33.794930       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0110 08:53:33.795038       1 config.go:315] "Starting node config controller"
	I0110 08:53:33.795056       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0110 08:53:33.794883       1 config.go:97] "Starting endpoint slice config controller"
	I0110 08:53:33.795113       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0110 08:53:33.895864       1 shared_informer.go:318] Caches are synced for node config
	I0110 08:53:33.895913       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0110 08:53:33.895933       1 shared_informer.go:318] Caches are synced for service config
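	
	kube-proxy is running in iptables mode, so every Service materializes as a set of KUBE-* chains on the node. Those chains can be listed to confirm the sync (a sketch; iptables-save ships in the node image):
	
	  # list the first KUBE-* chains programmed by kube-proxy
	  out/minikube-linux-amd64 -p old-k8s-version-093083 ssh -- sudo iptables-save | grep '^:KUBE' | head -20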
	
	
	==> kube-scheduler [dd24142d016939bba737e4aa2e124d9cca83e550da432b869538feff1f575331] <==
	I0110 08:53:30.655629       1 serving.go:348] Generated self-signed cert in-memory
	I0110 08:53:32.535295       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0110 08:53:32.535611       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:53:32.539212       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0110 08:53:32.539232       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 08:53:32.539243       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0110 08:53:32.539251       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0110 08:53:32.539286       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0110 08:53:32.539318       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0110 08:53:32.540320       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0110 08:53:32.540397       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0110 08:53:32.640396       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0110 08:53:32.640426       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0110 08:53:32.640407       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
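	
	The scheduler is healthy and serving its secure endpoint on localhost only. That endpoint can be probed from inside the node (a sketch; assumes curl is present in the kicbase image and that the default always-allowed health paths are in effect):
	
	  # the scheduler listens on 127.0.0.1:10259 (see the log above)
	  out/minikube-linux-amd64 -p old-k8s-version-093083 ssh -- curl -sk https://127.0.0.1:10259/healthz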
	
	
	==> kubelet <==
	Jan 10 08:53:45 old-k8s-version-093083 kubelet[731]: I0110 08:53:45.176132     731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh4pt\" (UniqueName: \"kubernetes.io/projected/47cbad00-43d9-47bc-93f9-87616b11c240-kube-api-access-rh4pt\") pod \"dashboard-metrics-scraper-5f989dc9cf-h2jqh\" (UID: \"47cbad00-43d9-47bc-93f9-87616b11c240\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h2jqh"
	Jan 10 08:53:45 old-k8s-version-093083 kubelet[731]: I0110 08:53:45.176178     731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8a543484-64c8-459a-9754-8b99619ce408-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-dtt5w\" (UID: \"8a543484-64c8-459a-9754-8b99619ce408\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dtt5w"
	Jan 10 08:53:45 old-k8s-version-093083 kubelet[731]: I0110 08:53:45.176204     731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/47cbad00-43d9-47bc-93f9-87616b11c240-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-h2jqh\" (UID: \"47cbad00-43d9-47bc-93f9-87616b11c240\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h2jqh"
	Jan 10 08:53:45 old-k8s-version-093083 kubelet[731]: I0110 08:53:45.176233     731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fddjv\" (UniqueName: \"kubernetes.io/projected/8a543484-64c8-459a-9754-8b99619ce408-kube-api-access-fddjv\") pod \"kubernetes-dashboard-8694d4445c-dtt5w\" (UID: \"8a543484-64c8-459a-9754-8b99619ce408\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dtt5w"
	Jan 10 08:53:48 old-k8s-version-093083 kubelet[731]: I0110 08:53:48.350615     731 scope.go:117] "RemoveContainer" containerID="607c9ddefff94db09f45a0f2c76d05e224b1ea85c72a367070e93aa147971ba4"
	Jan 10 08:53:49 old-k8s-version-093083 kubelet[731]: I0110 08:53:49.355238     731 scope.go:117] "RemoveContainer" containerID="607c9ddefff94db09f45a0f2c76d05e224b1ea85c72a367070e93aa147971ba4"
	Jan 10 08:53:49 old-k8s-version-093083 kubelet[731]: I0110 08:53:49.355529     731 scope.go:117] "RemoveContainer" containerID="dabc28c95171c3213864a92f325ebb091da4392b8a2afb9551c9f3cbfe48d2d1"
	Jan 10 08:53:49 old-k8s-version-093083 kubelet[731]: E0110 08:53:49.355961     731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h2jqh_kubernetes-dashboard(47cbad00-43d9-47bc-93f9-87616b11c240)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h2jqh" podUID="47cbad00-43d9-47bc-93f9-87616b11c240"
	Jan 10 08:53:50 old-k8s-version-093083 kubelet[731]: I0110 08:53:50.359185     731 scope.go:117] "RemoveContainer" containerID="dabc28c95171c3213864a92f325ebb091da4392b8a2afb9551c9f3cbfe48d2d1"
	Jan 10 08:53:50 old-k8s-version-093083 kubelet[731]: E0110 08:53:50.359635     731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h2jqh_kubernetes-dashboard(47cbad00-43d9-47bc-93f9-87616b11c240)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h2jqh" podUID="47cbad00-43d9-47bc-93f9-87616b11c240"
	Jan 10 08:53:52 old-k8s-version-093083 kubelet[731]: I0110 08:53:52.379560     731 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dtt5w" podStartSLOduration=1.433085441 podCreationTimestamp="2026-01-10 08:53:45 +0000 UTC" firstStartedPulling="2026-01-10 08:53:45.392919221 +0000 UTC m=+16.220335646" lastFinishedPulling="2026-01-10 08:53:51.339314001 +0000 UTC m=+22.166730438" observedRunningTime="2026-01-10 08:53:52.379107433 +0000 UTC m=+23.206523875" watchObservedRunningTime="2026-01-10 08:53:52.379480233 +0000 UTC m=+23.206896675"
	Jan 10 08:53:55 old-k8s-version-093083 kubelet[731]: I0110 08:53:55.362157     731 scope.go:117] "RemoveContainer" containerID="dabc28c95171c3213864a92f325ebb091da4392b8a2afb9551c9f3cbfe48d2d1"
	Jan 10 08:53:55 old-k8s-version-093083 kubelet[731]: E0110 08:53:55.362463     731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h2jqh_kubernetes-dashboard(47cbad00-43d9-47bc-93f9-87616b11c240)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h2jqh" podUID="47cbad00-43d9-47bc-93f9-87616b11c240"
	Jan 10 08:54:04 old-k8s-version-093083 kubelet[731]: I0110 08:54:04.393687     731 scope.go:117] "RemoveContainer" containerID="d4e39023b51206e0be48a97c34283a8e61c92fde7bfdd6b8d4de4724d840f8df"
	Jan 10 08:54:09 old-k8s-version-093083 kubelet[731]: I0110 08:54:09.280306     731 scope.go:117] "RemoveContainer" containerID="dabc28c95171c3213864a92f325ebb091da4392b8a2afb9551c9f3cbfe48d2d1"
	Jan 10 08:54:09 old-k8s-version-093083 kubelet[731]: I0110 08:54:09.409761     731 scope.go:117] "RemoveContainer" containerID="dabc28c95171c3213864a92f325ebb091da4392b8a2afb9551c9f3cbfe48d2d1"
	Jan 10 08:54:09 old-k8s-version-093083 kubelet[731]: I0110 08:54:09.410009     731 scope.go:117] "RemoveContainer" containerID="1aa38e065133620305b9ea5ba3cc57e2dd22a6a1fcb024cc2b81ed4b8495d94a"
	Jan 10 08:54:09 old-k8s-version-093083 kubelet[731]: E0110 08:54:09.410352     731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h2jqh_kubernetes-dashboard(47cbad00-43d9-47bc-93f9-87616b11c240)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h2jqh" podUID="47cbad00-43d9-47bc-93f9-87616b11c240"
	Jan 10 08:54:15 old-k8s-version-093083 kubelet[731]: I0110 08:54:15.361875     731 scope.go:117] "RemoveContainer" containerID="1aa38e065133620305b9ea5ba3cc57e2dd22a6a1fcb024cc2b81ed4b8495d94a"
	Jan 10 08:54:15 old-k8s-version-093083 kubelet[731]: E0110 08:54:15.362290     731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h2jqh_kubernetes-dashboard(47cbad00-43d9-47bc-93f9-87616b11c240)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h2jqh" podUID="47cbad00-43d9-47bc-93f9-87616b11c240"
	Jan 10 08:54:19 old-k8s-version-093083 kubelet[731]: I0110 08:54:19.326154     731 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jan 10 08:54:19 old-k8s-version-093083 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 08:54:19 old-k8s-version-093083 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 08:54:19 old-k8s-version-093083 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 08:54:19 old-k8s-version-093083 systemd[1]: kubelet.service: Consumed 1.493s CPU time.
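	
	The kubelet entries show dashboard-metrics-scraper cycling through CrashLoopBackOff (back-off growing from 10s to 20s) before the kubelet itself is stopped by the pause. The crashed container's prior output is the natural next thing to check (a sketch, using the pod name from the entries above):
	
	  # fetch the previous (crashed) container log
	  kubectl --context old-k8s-version-093083 -n kubernetes-dashboard \
	    logs dashboard-metrics-scraper-5f989dc9cf-h2jqh --previous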
	
	
	==> kubernetes-dashboard [e605b843e423ec01dd112548d07dcbdec1954f9df6ba09936c682d71de576f93] <==
	2026/01/10 08:53:51 Using namespace: kubernetes-dashboard
	2026/01/10 08:53:51 Using in-cluster config to connect to apiserver
	2026/01/10 08:53:51 Using secret token for csrf signing
	2026/01/10 08:53:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 08:53:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 08:53:51 Successful initial request to the apiserver, version: v1.28.0
	2026/01/10 08:53:51 Generating JWE encryption key
	2026/01/10 08:53:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 08:53:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 08:53:51 Initializing JWE encryption key from synchronized object
	2026/01/10 08:53:51 Creating in-cluster Sidecar client
	2026/01/10 08:53:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 08:53:51 Serving insecurely on HTTP port: 9090
	2026/01/10 08:54:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 08:53:51 Starting overwatch
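	
	The dashboard itself serves fine on 9090; only its metric client keeps failing against the dashboard-metrics-scraper Service, which matches the scraper pod crash-looping above. Whether that Service has any ready endpoints is a one-liner (a sketch):
	
	  # a Service with no ready endpoints explains the health-check failure
	  kubectl --context old-k8s-version-093083 -n kubernetes-dashboard \
	    get svc,endpoints dashboard-metrics-scraper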
	
	
	==> storage-provisioner [cf2021f237b7a23412332d927f1e3fc61448f19ce766d0d96d5a317c9855bb65] <==
	I0110 08:54:04.460297       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 08:54:04.469438       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 08:54:04.469494       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0110 08:54:21.866474       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 08:54:21.866651       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-093083_5095c604-fb31-4bb5-a7ad-f088104000b9!
	I0110 08:54:21.866626       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6d81b80b-b507-4754-870f-26841432edd7", APIVersion:"v1", ResourceVersion:"639", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-093083_5095c604-fb31-4bb5-a7ad-f088104000b9 became leader
	I0110 08:54:21.966846       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-093083_5095c604-fb31-4bb5-a7ad-f088104000b9!
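	
	This provisioner instance acquired the k8s.io-minikube-hostpath lease only after the previous instance (below) timed out reaching the apiserver. The lease is tracked on a kube-system Endpoints object and can be inspected directly (a sketch):
	
	  # the holder identity is recorded in an annotation on this object
	  kubectl --context old-k8s-version-093083 -n kube-system \
	    get endpoints k8s.io-minikube-hostpath -o yaml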
	
	
	==> storage-provisioner [d4e39023b51206e0be48a97c34283a8e61c92fde7bfdd6b8d4de4724d840f8df] <==
	I0110 08:53:33.691009       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 08:54:03.695136       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-093083 -n old-k8s-version-093083
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-093083 -n old-k8s-version-093083: exit status 2 (339.474441ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
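
The status helper renders one component per call through a Go template; exit status 2 with "Running" on stdout is why the harness notes "may be ok" here. The same fields can be combined into a single invocation (a sketch, assuming the standard status fields):

  # Host, Kubelet and APIServer are the fields the helpers query one at a time
  out/minikube-linux-amd64 status -p old-k8s-version-093083 \
    --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
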
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-093083 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-093083
helpers_test.go:244: (dbg) docker inspect old-k8s-version-093083:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5a78f6c87c300c6e0dd5f3bbe31e9ea3aaf153e366cdb192469aeff0f98607cc",
	        "Created": "2026-01-10T08:52:09.133397359Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 313349,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T08:53:22.935205602Z",
	            "FinishedAt": "2026-01-10T08:53:22.00820023Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/5a78f6c87c300c6e0dd5f3bbe31e9ea3aaf153e366cdb192469aeff0f98607cc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5a78f6c87c300c6e0dd5f3bbe31e9ea3aaf153e366cdb192469aeff0f98607cc/hostname",
	        "HostsPath": "/var/lib/docker/containers/5a78f6c87c300c6e0dd5f3bbe31e9ea3aaf153e366cdb192469aeff0f98607cc/hosts",
	        "LogPath": "/var/lib/docker/containers/5a78f6c87c300c6e0dd5f3bbe31e9ea3aaf153e366cdb192469aeff0f98607cc/5a78f6c87c300c6e0dd5f3bbe31e9ea3aaf153e366cdb192469aeff0f98607cc-json.log",
	        "Name": "/old-k8s-version-093083",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-093083:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-093083",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5a78f6c87c300c6e0dd5f3bbe31e9ea3aaf153e366cdb192469aeff0f98607cc",
	                "LowerDir": "/var/lib/docker/overlay2/d070b5e56f95f0eb086a5bbe43eeabd880e14f061ffc4bc06dcbc47a66b72ad3-init/diff:/var/lib/docker/overlay2/97b14eb520192356c991915ce74cd6094e6ef948f398ac688b667ea18491ccde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d070b5e56f95f0eb086a5bbe43eeabd880e14f061ffc4bc06dcbc47a66b72ad3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d070b5e56f95f0eb086a5bbe43eeabd880e14f061ffc4bc06dcbc47a66b72ad3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d070b5e56f95f0eb086a5bbe43eeabd880e14f061ffc4bc06dcbc47a66b72ad3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-093083",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-093083/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-093083",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-093083",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-093083",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fb9ba28ee210e086c1d182dde461a21b43a346a88d832ca6296c1445ef1fb399",
	            "SandboxKey": "/var/run/docker/netns/fb9ba28ee210",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-093083": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b8ccbd7d681c9cf4758976716607eccd2bce1e9581afb9f0c4894b2bbb7e4533",
	                    "EndpointID": "a047f574bbf3108209bd32fdb520f9899decca45a8535b473f54d9320f6f26ed",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "82:dd:97:63:ca:78",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-093083",
	                        "5a78f6c87c30"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
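
Individual fields of this payload can be extracted with docker inspect's Go-template --format flag rather than reading the full JSON, e.g. the host port published for the container's API server port (a sketch; 8443/tcp maps to 33111 in the output above):

  # pull one port mapping out of .NetworkSettings.Ports
  docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-093083
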
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-093083 -n old-k8s-version-093083
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-093083 -n old-k8s-version-093083: exit status 2 (349.148241ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-093083 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-093083 logs -n 25: (1.129522033s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-472660 sudo containerd config dump                                                                                                                                                                                                 │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                          │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo systemctl cat crio --no-pager                                                                                                                                                                                          │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo crio config                                                                                                                                                                                                            │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ delete  │ -p disable-driver-mounts-847921                                                                                                                                                                                                               │ disable-driver-mounts-847921 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p default-k8s-diff-port-225354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-093083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-095312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ stop    │ -p old-k8s-version-093083 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ stop    │ -p no-preload-095312 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-093083 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p old-k8s-version-093083 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable dashboard -p no-preload-095312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p no-preload-095312 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable metrics-server -p embed-certs-072273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-072273           │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ stop    │ -p embed-certs-072273 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-072273           │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-072273 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-072273           │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p embed-certs-072273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-072273           │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-225354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-225354 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-225354 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p default-k8s-diff-port-225354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ image   │ old-k8s-version-093083 image list --format=json                                                                                                                                                                                               │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p old-k8s-version-093083 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:54:10
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:54:10.770403  323767 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:54:10.770628  323767 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:10.770636  323767 out.go:374] Setting ErrFile to fd 2...
	I0110 08:54:10.770640  323767 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:10.770838  323767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:54:10.771277  323767 out.go:368] Setting JSON to false
	I0110 08:54:10.772500  323767 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2203,"bootTime":1768033048,"procs":370,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:54:10.772550  323767 start.go:143] virtualization: kvm guest
	I0110 08:54:10.774548  323767 out.go:179] * [default-k8s-diff-port-225354] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 08:54:10.775930  323767 notify.go:221] Checking for updates...
	I0110 08:54:10.775987  323767 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:54:10.777327  323767 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:54:10.778691  323767 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:54:10.779869  323767 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:54:10.780877  323767 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 08:54:10.782001  323767 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:54:10.783447  323767 config.go:182] Loaded profile config "default-k8s-diff-port-225354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:10.784033  323767 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:54:10.808861  323767 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:54:10.808963  323767 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:54:10.865984  323767 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-10 08:54:10.855636003 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:54:10.866142  323767 docker.go:319] overlay module found
	I0110 08:54:10.867929  323767 out.go:179] * Using the docker driver based on existing profile
	I0110 08:54:10.869062  323767 start.go:309] selected driver: docker
	I0110 08:54:10.869077  323767 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-225354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-225354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:54:10.869184  323767 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:54:10.869926  323767 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:54:10.925502  323767 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-10 08:54:10.916316006 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:54:10.925807  323767 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 08:54:10.925849  323767 cni.go:84] Creating CNI manager for ""
	I0110 08:54:10.925905  323767 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:54:10.925939  323767 start.go:353] cluster config:
	{Name:default-k8s-diff-port-225354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-225354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:54:10.927799  323767 out.go:179] * Starting "default-k8s-diff-port-225354" primary control-plane node in "default-k8s-diff-port-225354" cluster
	I0110 08:54:10.928989  323767 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 08:54:10.930145  323767 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:54:10.931151  323767 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:54:10.931179  323767 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 08:54:10.931186  323767 cache.go:65] Caching tarball of preloaded images
	I0110 08:54:10.931185  323767 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:54:10.931262  323767 preload.go:251] Found /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 08:54:10.931274  323767 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 08:54:10.931366  323767 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/config.json ...
	I0110 08:54:10.952478  323767 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 08:54:10.952497  323767 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 08:54:10.952511  323767 cache.go:243] Successfully downloaded all kic artifacts
	I0110 08:54:10.952538  323767 start.go:360] acquireMachinesLock for default-k8s-diff-port-225354: {Name:mk6f4cf32f69b6a51f12f83adcd3cd0eb0ae8cbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:54:10.952590  323767 start.go:364] duration metric: took 34.986µs to acquireMachinesLock for "default-k8s-diff-port-225354"
	I0110 08:54:10.952607  323767 start.go:96] Skipping create...Using existing machine configuration
	I0110 08:54:10.952614  323767 fix.go:54] fixHost starting: 
	I0110 08:54:10.952835  323767 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225354 --format={{.State.Status}}
	I0110 08:54:10.971677  323767 fix.go:112] recreateIfNeeded on default-k8s-diff-port-225354: state=Stopped err=<nil>
	W0110 08:54:10.971712  323767 fix.go:138] unexpected machine state, will restart: <nil>
	W0110 08:54:09.764911  319849 pod_ready.go:104] pod "coredns-7d764666f9-ss4nt" is not "Ready", error: <nil>
	W0110 08:54:12.264373  319849 pod_ready.go:104] pod "coredns-7d764666f9-ss4nt" is not "Ready", error: <nil>
	W0110 08:54:10.447913  313874 pod_ready.go:104] pod "coredns-7d764666f9-wpsnn" is not "Ready", error: <nil>
	I0110 08:54:12.447442  313874 pod_ready.go:94] pod "coredns-7d764666f9-wpsnn" is "Ready"
	I0110 08:54:12.447465  313874 pod_ready.go:86] duration metric: took 37.005475257s for pod "coredns-7d764666f9-wpsnn" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:12.450109  313874 pod_ready.go:83] waiting for pod "etcd-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:12.454228  313874 pod_ready.go:94] pod "etcd-no-preload-095312" is "Ready"
	I0110 08:54:12.454256  313874 pod_ready.go:86] duration metric: took 4.12175ms for pod "etcd-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:12.456424  313874 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:12.460419  313874 pod_ready.go:94] pod "kube-apiserver-no-preload-095312" is "Ready"
	I0110 08:54:12.460442  313874 pod_ready.go:86] duration metric: took 3.995934ms for pod "kube-apiserver-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:12.462584  313874 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:12.645718  313874 pod_ready.go:94] pod "kube-controller-manager-no-preload-095312" is "Ready"
	I0110 08:54:12.645758  313874 pod_ready.go:86] duration metric: took 183.153558ms for pod "kube-controller-manager-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:12.845858  313874 pod_ready.go:83] waiting for pod "kube-proxy-vrzf6" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:13.246243  313874 pod_ready.go:94] pod "kube-proxy-vrzf6" is "Ready"
	I0110 08:54:13.246269  313874 pod_ready.go:86] duration metric: took 400.386349ms for pod "kube-proxy-vrzf6" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:13.445337  313874 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:13.845542  313874 pod_ready.go:94] pod "kube-scheduler-no-preload-095312" is "Ready"
	I0110 08:54:13.845566  313874 pod_ready.go:86] duration metric: took 400.206561ms for pod "kube-scheduler-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:13.845577  313874 pod_ready.go:40] duration metric: took 38.40686605s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 08:54:13.890931  313874 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 08:54:13.892708  313874 out.go:179] * Done! kubectl is now configured to use "no-preload-095312" cluster and "default" namespace by default
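	Once a profile logs "Done!", minikube has written a kubeconfig context named after the profile, so the freshly restarted cluster can be spot-checked directly from the host. A minimal sketch (context name taken from the log line above; plain kubectl, nothing minikube-specific is assumed):
	
		# Confirm the node registered and the kube-system pods the test waits on are up.
		kubectl --context no-preload-095312 get nodes
		kubectl --context no-preload-095312 -n kube-system get pods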
	I0110 08:54:10.973787  323767 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-225354" ...
	I0110 08:54:10.973853  323767 cli_runner.go:164] Run: docker start default-k8s-diff-port-225354
	I0110 08:54:11.238333  323767 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225354 --format={{.State.Status}}
	I0110 08:54:11.258016  323767 kic.go:430] container "default-k8s-diff-port-225354" state is running.
	I0110 08:54:11.258559  323767 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225354
	I0110 08:54:11.280398  323767 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/config.json ...
	I0110 08:54:11.280702  323767 machine.go:94] provisionDockerMachine start ...
	I0110 08:54:11.280828  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:11.301429  323767 main.go:144] libmachine: Using SSH client type: native
	I0110 08:54:11.301668  323767 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0110 08:54:11.301681  323767 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 08:54:11.302419  323767 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42400->127.0.0.1:33123: read: connection reset by peer
	I0110 08:54:14.431592  323767 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-225354
	
	I0110 08:54:14.431635  323767 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-225354"
	I0110 08:54:14.431702  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:14.451318  323767 main.go:144] libmachine: Using SSH client type: native
	I0110 08:54:14.451515  323767 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0110 08:54:14.451527  323767 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-225354 && echo "default-k8s-diff-port-225354" | sudo tee /etc/hostname
	I0110 08:54:14.589004  323767 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-225354
	
	I0110 08:54:14.589083  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:14.607514  323767 main.go:144] libmachine: Using SSH client type: native
	I0110 08:54:14.607721  323767 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0110 08:54:14.607763  323767 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-225354' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-225354/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-225354' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 08:54:14.737006  323767 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 08:54:14.737035  323767 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-3641/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-3641/.minikube}
	I0110 08:54:14.737067  323767 ubuntu.go:190] setting up certificates
	I0110 08:54:14.737089  323767 provision.go:84] configureAuth start
	I0110 08:54:14.737149  323767 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225354
	I0110 08:54:14.756076  323767 provision.go:143] copyHostCerts
	I0110 08:54:14.756148  323767 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem, removing ...
	I0110 08:54:14.756164  323767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem
	I0110 08:54:14.756236  323767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem (1078 bytes)
	I0110 08:54:14.756404  323767 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem, removing ...
	I0110 08:54:14.756417  323767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem
	I0110 08:54:14.756450  323767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem (1123 bytes)
	I0110 08:54:14.756528  323767 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem, removing ...
	I0110 08:54:14.756537  323767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem
	I0110 08:54:14.756563  323767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem (1675 bytes)
	I0110 08:54:14.756647  323767 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-225354 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-225354 localhost minikube]
	I0110 08:54:14.793509  323767 provision.go:177] copyRemoteCerts
	I0110 08:54:14.793560  323767 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 08:54:14.793595  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:14.813116  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:14.905947  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0110 08:54:14.924947  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 08:54:14.942427  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0110 08:54:14.959358  323767 provision.go:87] duration metric: took 222.24641ms to configureAuth
	I0110 08:54:14.959385  323767 ubuntu.go:206] setting minikube options for container-runtime
	I0110 08:54:14.959541  323767 config.go:182] Loaded profile config "default-k8s-diff-port-225354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:14.959639  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:14.978423  323767 main.go:144] libmachine: Using SSH client type: native
	I0110 08:54:14.978687  323767 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0110 08:54:14.978709  323767 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 08:54:15.292502  323767 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 08:54:15.292533  323767 machine.go:97] duration metric: took 4.011809959s to provisionDockerMachine
	I0110 08:54:15.292549  323767 start.go:293] postStartSetup for "default-k8s-diff-port-225354" (driver="docker")
	I0110 08:54:15.292564  323767 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 08:54:15.292642  323767 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 08:54:15.292693  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:15.314158  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:15.408580  323767 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 08:54:15.412461  323767 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 08:54:15.412484  323767 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 08:54:15.412494  323767 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-3641/.minikube/addons for local assets ...
	I0110 08:54:15.412543  323767 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-3641/.minikube/files for local assets ...
	I0110 08:54:15.412618  323767 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem -> 71832.pem in /etc/ssl/certs
	I0110 08:54:15.412701  323767 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 08:54:15.420257  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem --> /etc/ssl/certs/71832.pem (1708 bytes)
	I0110 08:54:15.437907  323767 start.go:296] duration metric: took 145.342731ms for postStartSetup
	I0110 08:54:15.437987  323767 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:54:15.438056  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:15.456452  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:15.547075  323767 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 08:54:15.551926  323767 fix.go:56] duration metric: took 4.599307206s for fixHost
	I0110 08:54:15.551952  323767 start.go:83] releasing machines lock for "default-k8s-diff-port-225354", held for 4.599352578s
	I0110 08:54:15.552009  323767 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225354
	I0110 08:54:15.571390  323767 ssh_runner.go:195] Run: cat /version.json
	I0110 08:54:15.571479  323767 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 08:54:15.571492  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:15.571536  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:15.590047  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:15.591127  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:15.681002  323767 ssh_runner.go:195] Run: systemctl --version
	I0110 08:54:15.736158  323767 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 08:54:15.771411  323767 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 08:54:15.776401  323767 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 08:54:15.776474  323767 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 08:54:15.784643  323767 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 08:54:15.784665  323767 start.go:496] detecting cgroup driver to use...
	I0110 08:54:15.784700  323767 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 08:54:15.784774  323767 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 08:54:15.799081  323767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 08:54:15.812276  323767 docker.go:218] disabling cri-docker service (if available) ...
	I0110 08:54:15.812336  323767 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 08:54:15.826890  323767 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 08:54:15.839388  323767 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 08:54:15.922811  323767 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 08:54:15.998942  323767 docker.go:234] disabling docker service ...
	I0110 08:54:15.999015  323767 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 08:54:16.014407  323767 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 08:54:16.026725  323767 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 08:54:16.107584  323767 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 08:54:16.187958  323767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 08:54:16.200970  323767 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 08:54:16.215874  323767 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 08:54:16.215939  323767 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:54:16.225363  323767 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 08:54:16.225421  323767 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:54:16.234046  323767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:54:16.242715  323767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:54:16.251754  323767 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 08:54:16.260507  323767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:54:16.270006  323767 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:54:16.278297  323767 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:54:16.287021  323767 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 08:54:16.295062  323767 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 08:54:16.302531  323767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:54:16.386036  323767 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 08:54:16.519040  323767 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 08:54:16.519096  323767 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 08:54:16.523210  323767 start.go:574] Will wait 60s for crictl version
	I0110 08:54:16.523262  323767 ssh_runner.go:195] Run: which crictl
	I0110 08:54:16.526960  323767 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 08:54:16.555412  323767 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 08:54:16.555483  323767 ssh_runner.go:195] Run: crio --version
	I0110 08:54:16.583901  323767 ssh_runner.go:195] Run: crio --version
	I0110 08:54:16.612570  323767 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 08:54:16.613832  323767 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-225354 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:54:16.631782  323767 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 08:54:16.636032  323767 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 08:54:16.646878  323767 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-225354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-225354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 08:54:16.646997  323767 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:54:16.647043  323767 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 08:54:16.681410  323767 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 08:54:16.681432  323767 crio.go:433] Images already preloaded, skipping extraction
	I0110 08:54:16.681488  323767 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 08:54:16.709542  323767 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 08:54:16.709564  323767 cache_images.go:86] Images are preloaded, skipping loading
	I0110 08:54:16.709578  323767 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 crio true true} ...
	I0110 08:54:16.709686  323767 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-225354 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-225354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 08:54:16.709773  323767 ssh_runner.go:195] Run: crio config
	I0110 08:54:16.757583  323767 cni.go:84] Creating CNI manager for ""
	I0110 08:54:16.757609  323767 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:54:16.757627  323767 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 08:54:16.757647  323767 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-225354 NodeName:default-k8s-diff-port-225354 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 08:54:16.757801  323767 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-225354"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
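	The rendered kubeadm config above is copied to /var/tmp/minikube/kubeadm.yaml.new a few steps below. As a sketch, it could also be validated offline with the kubeadm binary the log later finds under /var/lib/minikube/binaries/v1.35.0; this exact invocation is an assumption for illustration, not a command the test itself runs:
	
		# Hypothetical offline check of the generated config (path taken from the log).
		sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new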
	
	I0110 08:54:16.757897  323767 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 08:54:16.767516  323767 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 08:54:16.767578  323767 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 08:54:16.775454  323767 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0110 08:54:16.788355  323767 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 08:54:16.801342  323767 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0110 08:54:16.814642  323767 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 08:54:16.819369  323767 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 08:54:16.829406  323767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:54:16.909443  323767 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:54:16.933270  323767 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354 for IP: 192.168.85.2
	I0110 08:54:16.933296  323767 certs.go:195] generating shared ca certs ...
	I0110 08:54:16.933320  323767 certs.go:227] acquiring lock for ca certs: {Name:mk00e261408d0e9fd9be39128613c5110a764de2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:54:16.933503  323767 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.key
	I0110 08:54:16.933570  323767 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.key
	I0110 08:54:16.933585  323767 certs.go:257] generating profile certs ...
	I0110 08:54:16.933711  323767 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/client.key
	I0110 08:54:16.933843  323767 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/apiserver.key.b2f93262
	I0110 08:54:16.933914  323767 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/proxy-client.key
	I0110 08:54:16.934071  323767 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183.pem (1338 bytes)
	W0110 08:54:16.934116  323767 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183_empty.pem, impossibly tiny 0 bytes
	I0110 08:54:16.934130  323767 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem (1675 bytes)
	I0110 08:54:16.934171  323767 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem (1078 bytes)
	I0110 08:54:16.934216  323767 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem (1123 bytes)
	I0110 08:54:16.934253  323767 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem (1675 bytes)
	I0110 08:54:16.934322  323767 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem (1708 bytes)
	I0110 08:54:16.935216  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 08:54:16.954242  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 08:54:16.973102  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 08:54:16.991857  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 08:54:17.016862  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0110 08:54:17.038329  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 08:54:17.058014  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 08:54:17.078592  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 08:54:17.097918  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183.pem --> /usr/share/ca-certificates/7183.pem (1338 bytes)
	I0110 08:54:17.120708  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem --> /usr/share/ca-certificates/71832.pem (1708 bytes)
	I0110 08:54:17.139003  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 08:54:17.156524  323767 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 08:54:17.168668  323767 ssh_runner.go:195] Run: openssl version
	I0110 08:54:17.175368  323767 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7183.pem
	I0110 08:54:17.182747  323767 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7183.pem /etc/ssl/certs/7183.pem
	I0110 08:54:17.190691  323767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7183.pem
	I0110 08:54:17.194457  323767 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 08:23 /usr/share/ca-certificates/7183.pem
	I0110 08:54:17.194502  323767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7183.pem
	I0110 08:54:17.228543  323767 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 08:54:17.236153  323767 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/71832.pem
	I0110 08:54:17.243355  323767 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/71832.pem /etc/ssl/certs/71832.pem
	I0110 08:54:17.250614  323767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71832.pem
	I0110 08:54:17.254314  323767 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 08:23 /usr/share/ca-certificates/71832.pem
	I0110 08:54:17.254360  323767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71832.pem
	I0110 08:54:17.291080  323767 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 08:54:17.299045  323767 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:54:17.306390  323767 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 08:54:17.314035  323767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:54:17.317953  323767 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:54:17.318000  323767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:54:17.355000  323767 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 08:54:17.362980  323767 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 08:54:17.367171  323767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 08:54:17.402450  323767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 08:54:17.439845  323767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 08:54:17.488989  323767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 08:54:17.547905  323767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 08:54:17.597239  323767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0110 08:54:17.641402  323767 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-225354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-225354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:54:17.641512  323767 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:54:17.641568  323767 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:54:17.671428  323767 cri.go:96] found id: "85fbcb73a888a911d321e3d1ed0152e1aa93447d76ca22015d3a09638892f2af"
	I0110 08:54:17.671452  323767 cri.go:96] found id: "6de83a52f42b4d00ef4463aa0a10635035e611d92fcb5f692497cd23e40d7676"
	I0110 08:54:17.671467  323767 cri.go:96] found id: "767f06c98be9d86d55d0cbaaa375406db22fd312258e490654cdcba950d47c27"
	I0110 08:54:17.671472  323767 cri.go:96] found id: "5055dfe1945b7e474350afd64ade8604c08027a381ce57320b00e445ef977a5c"
	I0110 08:54:17.671475  323767 cri.go:96] found id: ""
	I0110 08:54:17.671511  323767 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 08:54:17.683716  323767 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:54:17Z" level=error msg="open /run/runc: no such file or directory"
	I0110 08:54:17.683818  323767 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 08:54:17.692500  323767 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 08:54:17.692519  323767 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 08:54:17.692563  323767 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 08:54:17.700875  323767 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 08:54:17.702210  323767 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-225354" does not appear in /home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:54:17.703105  323767 kubeconfig.go:62] /home/jenkins/minikube-integration/22427-3641/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-225354" cluster setting kubeconfig missing "default-k8s-diff-port-225354" context setting]
	I0110 08:54:17.704607  323767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/kubeconfig: {Name:mk0d29b3b0ee1fd71729aff31f901be50a2d0664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:54:17.706397  323767 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 08:54:17.714287  323767 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I0110 08:54:17.714312  323767 kubeadm.go:602] duration metric: took 21.788411ms to restartPrimaryControlPlane
	I0110 08:54:17.714319  323767 kubeadm.go:403] duration metric: took 72.928609ms to StartCluster
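The "does not require reconfiguration" decision comes from the diff at 08:54:17.706397 returning no changes. The same gate can be checked by hand, with both paths taken verbatim from the log:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "kubeadm config unchanged; restart can reuse it"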
	I0110 08:54:17.714335  323767 settings.go:142] acquiring lock: {Name:mkbb32fc6441ceb31ce2923ea8999f8375298f08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:54:17.714398  323767 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:54:17.715957  323767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/kubeconfig: {Name:mk0d29b3b0ee1fd71729aff31f901be50a2d0664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:54:17.716233  323767 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 08:54:17.716303  323767 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 08:54:17.716385  323767 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-225354"
	I0110 08:54:17.716404  323767 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-225354"
	W0110 08:54:17.716410  323767 addons.go:248] addon storage-provisioner should already be in state true
	I0110 08:54:17.716433  323767 host.go:66] Checking if "default-k8s-diff-port-225354" exists ...
	I0110 08:54:17.716458  323767 config.go:182] Loaded profile config "default-k8s-diff-port-225354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:17.716558  323767 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-225354"
	I0110 08:54:17.716606  323767 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-225354"
	I0110 08:54:17.716526  323767 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-225354"
	I0110 08:54:17.716694  323767 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-225354"
	W0110 08:54:17.716710  323767 addons.go:248] addon dashboard should already be in state true
	I0110 08:54:17.716747  323767 host.go:66] Checking if "default-k8s-diff-port-225354" exists ...
	I0110 08:54:17.716965  323767 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225354 --format={{.State.Status}}
	I0110 08:54:17.716965  323767 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225354 --format={{.State.Status}}
	I0110 08:54:17.717413  323767 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225354 --format={{.State.Status}}
	I0110 08:54:17.721904  323767 out.go:179] * Verifying Kubernetes components...
	I0110 08:54:17.723462  323767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:54:17.745550  323767 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 08:54:17.745608  323767 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 08:54:17.746683  323767 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 08:54:17.746701  323767 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 08:54:17.746704  323767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	W0110 08:54:14.265096  319849 pod_ready.go:104] pod "coredns-7d764666f9-ss4nt" is not "Ready", error: <nil>
	W0110 08:54:16.764367  319849 pod_ready.go:104] pod "coredns-7d764666f9-ss4nt" is not "Ready", error: <nil>
	I0110 08:54:17.746812  323767 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-225354"
	W0110 08:54:17.746828  323767 addons.go:248] addon default-storageclass should already be in state true
	I0110 08:54:17.746787  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:17.746853  323767 host.go:66] Checking if "default-k8s-diff-port-225354" exists ...
	I0110 08:54:17.747311  323767 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225354 --format={{.State.Status}}
	I0110 08:54:17.747889  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 08:54:17.747930  323767 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 08:54:17.747987  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:17.783552  323767 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 08:54:17.783576  323767 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 08:54:17.783630  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:17.783875  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:17.785980  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:17.810678  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:17.872366  323767 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:54:17.887886  323767 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-225354" to be "Ready" ...
	I0110 08:54:17.898099  323767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 08:54:17.903229  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 08:54:17.903253  323767 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 08:54:17.917337  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 08:54:17.917360  323767 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 08:54:17.921609  323767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 08:54:17.933302  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 08:54:17.933326  323767 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 08:54:17.947626  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 08:54:17.947646  323767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 08:54:17.960408  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 08:54:17.960475  323767 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 08:54:17.974266  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 08:54:17.974295  323767 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 08:54:17.986776  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 08:54:17.986799  323767 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 08:54:17.999337  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 08:54:17.999358  323767 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 08:54:18.011953  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 08:54:18.011978  323767 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 08:54:18.024936  323767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
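Once that apply returns (its completion is logged at 08:54:20.082816 below), the created dashboard objects can be inspected directly. A hedged spot-check, assuming the profile's kubeconfig context carries the profile name:

	kubectl --context default-k8s-diff-port-225354 -n kubernetes-dashboard get deploy,svc,sa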
	I0110 08:54:19.541880  323767 node_ready.go:49] node "default-k8s-diff-port-225354" is "Ready"
	I0110 08:54:19.541922  323767 node_ready.go:38] duration metric: took 1.653997821s for node "default-k8s-diff-port-225354" to be "Ready" ...
	I0110 08:54:19.541939  323767 api_server.go:52] waiting for apiserver process to appear ...
	I0110 08:54:19.541994  323767 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 08:54:20.082614  323767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.184477569s)
	I0110 08:54:20.082684  323767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.161045842s)
	I0110 08:54:20.082816  323767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.057848718s)
	I0110 08:54:20.082870  323767 api_server.go:72] duration metric: took 2.366605517s to wait for apiserver process to appear ...
	I0110 08:54:20.082941  323767 api_server.go:88] waiting for apiserver healthz status ...
	I0110 08:54:20.082962  323767 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0110 08:54:20.084235  323767 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-225354 addons enable metrics-server
	
	I0110 08:54:20.087836  323767 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 08:54:20.087861  323767 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
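Both 500s fail only on bootstrap post-start hooks, which normally flip to ok within seconds of apiserver start; the retry at 08:54:20.58 below already shows scheduling/bootstrap-system-priority-classes passing. kube-apiserver also serves each check as its own path, so a single hook can be polled in isolation. A hedged probe (may return 403 rather than a status if anonymous access to healthz subpaths is restricted by RBAC):

	curl -k https://192.168.85.2:8444/healthz/poststarthook/rbac/bootstrap-roles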
	I0110 08:54:20.091293  323767 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 08:54:20.092886  323767 addons.go:530] duration metric: took 2.376597654s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0110 08:54:20.583799  323767 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0110 08:54:20.588631  323767 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 08:54:20.588668  323767 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	
	
	==> CRI-O <==
	Jan 10 08:53:51 old-k8s-version-093083 crio[572]: time="2026-01-10T08:53:51.372233058Z" level=info msg="Created container e605b843e423ec01dd112548d07dcbdec1954f9df6ba09936c682d71de576f93: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dtt5w/kubernetes-dashboard" id=07903371-8da8-495a-afd7-b8ab70043b06 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:53:51 old-k8s-version-093083 crio[572]: time="2026-01-10T08:53:51.372765099Z" level=info msg="Starting container: e605b843e423ec01dd112548d07dcbdec1954f9df6ba09936c682d71de576f93" id=51801ec2-cefd-495a-9fe2-ded1f7ca23f7 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:53:51 old-k8s-version-093083 crio[572]: time="2026-01-10T08:53:51.374609171Z" level=info msg="Started container" PID=1764 containerID=e605b843e423ec01dd112548d07dcbdec1954f9df6ba09936c682d71de576f93 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dtt5w/kubernetes-dashboard id=51801ec2-cefd-495a-9fe2-ded1f7ca23f7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c11c917910354b25b044942572de015401cf3513251c5b987890e899751ca5d4
	Jan 10 08:54:04 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:04.3942689Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a7917da4-d483-41bc-b23e-c41b1ccfc4a6 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:04 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:04.395253651Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c01dee5a-c62c-400e-afd9-2039ea7da28f name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:04 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:04.396439658Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=e013fcf0-f850-4bdf-9693-8e9639a18bc4 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:04 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:04.396579566Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:04 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:04.401280443Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:04 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:04.401469683Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/92e0e38d53ff4c048f7a1f153db9ca954d5ac940056dd1e118d05feca6ee27f1/merged/etc/passwd: no such file or directory"
	Jan 10 08:54:04 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:04.401506885Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/92e0e38d53ff4c048f7a1f153db9ca954d5ac940056dd1e118d05feca6ee27f1/merged/etc/group: no such file or directory"
	Jan 10 08:54:04 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:04.401827098Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:04 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:04.441579603Z" level=info msg="Created container cf2021f237b7a23412332d927f1e3fc61448f19ce766d0d96d5a317c9855bb65: kube-system/storage-provisioner/storage-provisioner" id=e013fcf0-f850-4bdf-9693-8e9639a18bc4 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:04 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:04.442203505Z" level=info msg="Starting container: cf2021f237b7a23412332d927f1e3fc61448f19ce766d0d96d5a317c9855bb65" id=f58c7891-3745-4c82-a4e7-7829a174e14c name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:54:04 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:04.444451377Z" level=info msg="Started container" PID=1787 containerID=cf2021f237b7a23412332d927f1e3fc61448f19ce766d0d96d5a317c9855bb65 description=kube-system/storage-provisioner/storage-provisioner id=f58c7891-3745-4c82-a4e7-7829a174e14c name=/runtime.v1.RuntimeService/StartContainer sandboxID=7804ad23ae100420b531129f18a5a98e25b1d068c481ea99a9d4d0a1688346b6
	Jan 10 08:54:09 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:09.280966335Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=84e35603-001d-4999-baf2-8d12e7917f78 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:09 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:09.282021851Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9b3b19d8-3690-4771-814c-c59d1f2e9fa5 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:09 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:09.282916237Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h2jqh/dashboard-metrics-scraper" id=092e25e2-d699-4136-bae6-54833863b7de name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:09 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:09.283052294Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:09 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:09.288824393Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:09 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:09.289356686Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:09 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:09.31819171Z" level=info msg="Created container 1aa38e065133620305b9ea5ba3cc57e2dd22a6a1fcb024cc2b81ed4b8495d94a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h2jqh/dashboard-metrics-scraper" id=092e25e2-d699-4136-bae6-54833863b7de name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:09 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:09.318806394Z" level=info msg="Starting container: 1aa38e065133620305b9ea5ba3cc57e2dd22a6a1fcb024cc2b81ed4b8495d94a" id=78002839-7915-4ad3-bfcd-9e0a3016379b name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:54:09 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:09.321022927Z" level=info msg="Started container" PID=1825 containerID=1aa38e065133620305b9ea5ba3cc57e2dd22a6a1fcb024cc2b81ed4b8495d94a description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h2jqh/dashboard-metrics-scraper id=78002839-7915-4ad3-bfcd-9e0a3016379b name=/runtime.v1.RuntimeService/StartContainer sandboxID=8b541984d6d40c1cd5c80f754ca07e560ec8853186a1581abb8783eded58abb2
	Jan 10 08:54:09 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:09.411011114Z" level=info msg="Removing container: dabc28c95171c3213864a92f325ebb091da4392b8a2afb9551c9f3cbfe48d2d1" id=0e77320e-867a-4f16-acc8-2bf02579fe54 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 08:54:09 old-k8s-version-093083 crio[572]: time="2026-01-10T08:54:09.419632285Z" level=info msg="Removed container dabc28c95171c3213864a92f325ebb091da4392b8a2afb9551c9f3cbfe48d2d1: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h2jqh/dashboard-metrics-scraper" id=0e77320e-867a-4f16-acc8-2bf02579fe54 name=/runtime.v1.RuntimeService/RemoveContainer
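The CRI-O excerpt above shows a healthy create/start cycle for the kubernetes-dashboard, storage-provisioner, and dashboard-metrics-scraper containers, plus removal of an earlier metrics-scraper instance. The same journal can be tailed directly on the node; a hedged sketch using minikube's ssh passthrough:

	out/minikube-linux-amd64 -p old-k8s-version-093083 ssh -- sudo journalctl -u crio --no-pager -n 50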
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	1aa38e0651336       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   2                   8b541984d6d40       dashboard-metrics-scraper-5f989dc9cf-h2jqh       kubernetes-dashboard
	cf2021f237b7a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   7804ad23ae100       storage-provisioner                              kube-system
	e605b843e423e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   32 seconds ago      Running             kubernetes-dashboard        0                   c11c917910354       kubernetes-dashboard-8694d4445c-dtt5w            kubernetes-dashboard
	baf39ba3661d5       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           49 seconds ago      Running             coredns                     0                   466b958d31913       coredns-5dd5756b68-sscts                         kube-system
	d7b14d87548cc       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   28bf08291f841       busybox                                          default
	be67c5d068c39       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           49 seconds ago      Running             kindnet-cni                 0                   a5effcbbd7e5a       kindnet-nn64b                                    kube-system
	b731c34ce0dc7       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           49 seconds ago      Running             kube-proxy                  0                   c15fea7f8fed5       kube-proxy-r7qzb                                 kube-system
	d4e39023b5120       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   7804ad23ae100       storage-provisioner                              kube-system
	dd24142d01693       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           53 seconds ago      Running             kube-scheduler              0                   05d2877df5917       kube-scheduler-old-k8s-version-093083            kube-system
	77835742c9e4e       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           53 seconds ago      Running             kube-controller-manager     0                   69753a0f1a861       kube-controller-manager-old-k8s-version-093083   kube-system
	45c05fec75b01       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           53 seconds ago      Running             etcd                        0                   f51c02cd2bc04       etcd-old-k8s-version-093083                      kube-system
	a34eedbb84c37       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           53 seconds ago      Running             kube-apiserver              0                   41e628aecc10e       kube-apiserver-old-k8s-version-093083            kube-system
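This table is crictl's container listing; note dashboard-metrics-scraper sits in Exited with ATTEMPT 2, matching the remove/recreate cycle in the CRI-O log above. A hedged way to regenerate it on the node:

	out/minikube-linux-amd64 -p old-k8s-version-093083 ssh -- sudo crictl ps -a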
	
	
	==> coredns [baf39ba3661d5e4554402c182c54915a779e9c2589d4896b8c74b818186d2e2e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49774 - 4918 "HINFO IN 4927896878841463775.316847351234262986. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.015460235s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
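The repeated "waiting for Kubernetes API" lines predate apiserver readiness; once CoreDNS 1.10.1 starts serving on .:53, the single NXDOMAIN is its own HINFO self-check and is expected. A hedged fetch of the same log through the API, assuming the usual minikube context name:

	kubectl --context old-k8s-version-093083 -n kube-system logs coredns-5dd5756b68-sscts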
	
	
	==> describe nodes <==
	Name:               old-k8s-version-093083
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-093083
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=old-k8s-version-093083
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T08_52_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 08:52:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-093083
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 08:54:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 08:54:03 +0000   Sat, 10 Jan 2026 08:52:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 08:54:03 +0000   Sat, 10 Jan 2026 08:52:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 08:54:03 +0000   Sat, 10 Jan 2026 08:52:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 08:54:03 +0000   Sat, 10 Jan 2026 08:52:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-093083
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                c7a82a71-54f6-4520-9c6e-142f796b8561
	  Boot ID:                    7c12f492-59b6-440b-986c-741ece916a23
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-sscts                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-old-k8s-version-093083                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-nn64b                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-old-k8s-version-093083             250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-old-k8s-version-093083    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-r7qzb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-old-k8s-version-093083             100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-h2jqh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-dtt5w             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 49s                kube-proxy       
	  Normal  NodeHasSufficientMemory  118s               kubelet          Node old-k8s-version-093083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s               kubelet          Node old-k8s-version-093083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s               kubelet          Node old-k8s-version-093083 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s               node-controller  Node old-k8s-version-093083 event: Registered Node old-k8s-version-093083 in Controller
	  Normal  NodeReady                92s                kubelet          Node old-k8s-version-093083 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node old-k8s-version-093083 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node old-k8s-version-093083 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node old-k8s-version-093083 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                node-controller  Node old-k8s-version-093083 event: Registered Node old-k8s-version-093083 in Controller
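The Allocated resources block is the column sum of the pod table above; as a quick check, the six non-zero CPU requests (coredns 100m, etcd 100m, kindnet 100m, kube-apiserver 250m, kube-controller-manager 200m, kube-scheduler 100m) add up to the reported 850m, about 10% of the 8-CPU node:

	echo "$((100+100+100+250+200+100))m of 8000m"   # -> 850m of 8000m, ~10%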
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 4a 03 46 88 66 4e 08 06
	[  +6.406566] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.001429] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[ +24.002020] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7a 6f 49 7e 9d d1 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 3a ac 18 98 c9 08 06
	[Jan10 08:52] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4a 05 5c 86 cf df 08 06
	[  +0.000337] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.000602] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[  +0.739059] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	[  +1.988700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[ +20.040962] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 ac f8 cf fb 84 08 06
	[  +0.000375] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[  +0.000915] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
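"Martian source" entries are logged when the kernel's log_martians setting is on and a packet arrives from an address the routing table considers impossible on that interface; here they track pod veth churn on 10.244.0.0/24 and are noise rather than a failure signal. A hedged check of the setting on the node:

	out/minikube-linux-amd64 -p old-k8s-version-093083 ssh -- sysctl net.ipv4.conf.all.log_martians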
	
	
	==> etcd [45c05fec75b0148f5c10bc223ec4f3c0de54145a816e2131a615a16966edecc9] <==
	{"level":"info","ts":"2026-01-10T08:53:29.865903Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T08:53:29.86594Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T08:53:29.865773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2026-01-10T08:53:29.866256Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2026-01-10T08:53:29.866574Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T08:53:29.866759Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2026-01-10T08:53:29.868358Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2026-01-10T08:53:29.868626Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T08:53:29.868692Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T08:53:29.868496Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2026-01-10T08:53:29.869239Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2026-01-10T08:53:31.353688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T08:53:31.353763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T08:53:31.353788Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2026-01-10T08:53:31.353807Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:31.353816Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:31.353828Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:31.353838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:31.354813Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-093083 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T08:53:31.354845Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:53:31.354846Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:53:31.355053Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T08:53:31.355083Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T08:53:31.356169Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2026-01-10T08:53:31.356427Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 08:54:23 up 36 min,  0 user,  load average: 4.19, 3.97, 2.61
	Linux old-k8s-version-093083 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [be67c5d068c39c6b842d3d6eabf77430720ba8af75b51a3ea51b3a1d05abe021] <==
	I0110 08:53:33.912869       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 08:53:33.913323       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0110 08:53:33.913518       1 main.go:148] setting mtu 1500 for CNI 
	I0110 08:53:33.913542       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 08:53:33.913564       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T08:53:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 08:53:34.116325       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 08:53:34.116357       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 08:53:34.116370       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 08:53:34.208269       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 08:53:34.507530       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 08:53:34.507561       1 metrics.go:72] Registering metrics
	I0110 08:53:34.507870       1 controller.go:711] "Syncing nftables rules"
	I0110 08:53:44.116110       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0110 08:53:44.116192       1 main.go:301] handling current node
	I0110 08:53:54.115876       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0110 08:53:54.115930       1 main.go:301] handling current node
	I0110 08:54:04.115903       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0110 08:54:04.116064       1 main.go:301] handling current node
	I0110 08:54:14.115428       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I0110 08:54:14.115461       1 main.go:301] handling current node
	
	
	==> kube-apiserver [a34eedbb84c37c48a1a753a25087a48c8b7295f35503fe8d9738f819582226fa] <==
	I0110 08:53:32.570582       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0110 08:53:32.571792       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0110 08:53:32.571865       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0110 08:53:32.571934       1 aggregator.go:166] initial CRD sync complete...
	I0110 08:53:32.571954       1 autoregister_controller.go:141] Starting autoregister controller
	I0110 08:53:32.571958       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0110 08:53:32.571962       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 08:53:32.571974       1 cache.go:39] Caches are synced for autoregister controller
	I0110 08:53:32.572033       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 08:53:32.572033       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0110 08:53:32.572197       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0110 08:53:32.574228       1 shared_informer.go:318] Caches are synced for configmaps
	E0110 08:53:32.577876       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 08:53:33.476196       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0110 08:53:33.500519       1 controller.go:624] quota admission added evaluator for: namespaces
	I0110 08:53:33.531703       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0110 08:53:33.549066       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 08:53:33.556708       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 08:53:33.566794       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0110 08:53:33.638231       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.187.253"}
	I0110 08:53:33.657164       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.97.221.136"}
	I0110 08:53:44.985637       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 08:53:45.006839       1 controller.go:624] quota admission added evaluator for: endpoints
	I0110 08:53:45.031110       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0110 08:53:45.031111       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [77835742c9e4e9169a8997b0af913ac31071a741ce055172f48d6e40e8bb0dfa] <==
	I0110 08:53:45.053242       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="17.097663ms"
	I0110 08:53:45.053327       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="17.21119ms"
	I0110 08:53:45.062614       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="9.231422ms"
	I0110 08:53:45.062693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="43.574µs"
	I0110 08:53:45.064663       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="11.315156ms"
	I0110 08:53:45.064824       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="42.67µs"
	I0110 08:53:45.073799       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="81.953µs"
	I0110 08:53:45.083755       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="94.314µs"
	I0110 08:53:45.109017       1 shared_informer.go:318] Caches are synced for PV protection
	I0110 08:53:45.110145       1 shared_informer.go:318] Caches are synced for persistent volume
	I0110 08:53:45.112397       1 shared_informer.go:318] Caches are synced for attach detach
	I0110 08:53:45.157239       1 shared_informer.go:318] Caches are synced for resource quota
	I0110 08:53:45.237365       1 shared_informer.go:318] Caches are synced for resource quota
	I0110 08:53:45.555380       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 08:53:45.588021       1 shared_informer.go:318] Caches are synced for garbage collector
	I0110 08:53:45.588071       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0110 08:53:48.364501       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.444µs"
	I0110 08:53:49.371586       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="101.712µs"
	I0110 08:53:50.372897       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.469µs"
	I0110 08:53:52.385548       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.93216ms"
	I0110 08:53:52.385654       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="67.557µs"
	I0110 08:54:05.397996       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.984109ms"
	I0110 08:54:05.398132       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.359µs"
	I0110 08:54:09.421188       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="116.742µs"
	I0110 08:54:15.371810       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="99.206µs"
	
	
	==> kube-proxy [b731c34ce0dc7be8cef547822c5515bcc237062ed93d07c59c1bf099151ddcd5] <==
	I0110 08:53:33.743911       1 server_others.go:69] "Using iptables proxy"
	I0110 08:53:33.763183       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I0110 08:53:33.791309       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 08:53:33.793748       1 server_others.go:152] "Using iptables Proxier"
	I0110 08:53:33.793787       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0110 08:53:33.793798       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0110 08:53:33.793840       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0110 08:53:33.794140       1 server.go:846] "Version info" version="v1.28.0"
	I0110 08:53:33.794157       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:53:33.794852       1 config.go:188] "Starting service config controller"
	I0110 08:53:33.794930       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0110 08:53:33.795038       1 config.go:315] "Starting node config controller"
	I0110 08:53:33.795056       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0110 08:53:33.794883       1 config.go:97] "Starting endpoint slice config controller"
	I0110 08:53:33.795113       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0110 08:53:33.895864       1 shared_informer.go:318] Caches are synced for node config
	I0110 08:53:33.895913       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0110 08:53:33.895933       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [dd24142d016939bba737e4aa2e124d9cca83e550da432b869538feff1f575331] <==
	I0110 08:53:30.655629       1 serving.go:348] Generated self-signed cert in-memory
	I0110 08:53:32.535295       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0110 08:53:32.535611       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:53:32.539212       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0110 08:53:32.539232       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 08:53:32.539243       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0110 08:53:32.539251       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0110 08:53:32.539286       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0110 08:53:32.539318       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0110 08:53:32.540320       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0110 08:53:32.540397       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0110 08:53:32.640396       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0110 08:53:32.640426       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0110 08:53:32.640407       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Jan 10 08:53:45 old-k8s-version-093083 kubelet[731]: I0110 08:53:45.176132     731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh4pt\" (UniqueName: \"kubernetes.io/projected/47cbad00-43d9-47bc-93f9-87616b11c240-kube-api-access-rh4pt\") pod \"dashboard-metrics-scraper-5f989dc9cf-h2jqh\" (UID: \"47cbad00-43d9-47bc-93f9-87616b11c240\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h2jqh"
	Jan 10 08:53:45 old-k8s-version-093083 kubelet[731]: I0110 08:53:45.176178     731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/8a543484-64c8-459a-9754-8b99619ce408-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-dtt5w\" (UID: \"8a543484-64c8-459a-9754-8b99619ce408\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dtt5w"
	Jan 10 08:53:45 old-k8s-version-093083 kubelet[731]: I0110 08:53:45.176204     731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/47cbad00-43d9-47bc-93f9-87616b11c240-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-h2jqh\" (UID: \"47cbad00-43d9-47bc-93f9-87616b11c240\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h2jqh"
	Jan 10 08:53:45 old-k8s-version-093083 kubelet[731]: I0110 08:53:45.176233     731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fddjv\" (UniqueName: \"kubernetes.io/projected/8a543484-64c8-459a-9754-8b99619ce408-kube-api-access-fddjv\") pod \"kubernetes-dashboard-8694d4445c-dtt5w\" (UID: \"8a543484-64c8-459a-9754-8b99619ce408\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dtt5w"
	Jan 10 08:53:48 old-k8s-version-093083 kubelet[731]: I0110 08:53:48.350615     731 scope.go:117] "RemoveContainer" containerID="607c9ddefff94db09f45a0f2c76d05e224b1ea85c72a367070e93aa147971ba4"
	Jan 10 08:53:49 old-k8s-version-093083 kubelet[731]: I0110 08:53:49.355238     731 scope.go:117] "RemoveContainer" containerID="607c9ddefff94db09f45a0f2c76d05e224b1ea85c72a367070e93aa147971ba4"
	Jan 10 08:53:49 old-k8s-version-093083 kubelet[731]: I0110 08:53:49.355529     731 scope.go:117] "RemoveContainer" containerID="dabc28c95171c3213864a92f325ebb091da4392b8a2afb9551c9f3cbfe48d2d1"
	Jan 10 08:53:49 old-k8s-version-093083 kubelet[731]: E0110 08:53:49.355961     731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h2jqh_kubernetes-dashboard(47cbad00-43d9-47bc-93f9-87616b11c240)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h2jqh" podUID="47cbad00-43d9-47bc-93f9-87616b11c240"
	Jan 10 08:53:50 old-k8s-version-093083 kubelet[731]: I0110 08:53:50.359185     731 scope.go:117] "RemoveContainer" containerID="dabc28c95171c3213864a92f325ebb091da4392b8a2afb9551c9f3cbfe48d2d1"
	Jan 10 08:53:50 old-k8s-version-093083 kubelet[731]: E0110 08:53:50.359635     731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h2jqh_kubernetes-dashboard(47cbad00-43d9-47bc-93f9-87616b11c240)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h2jqh" podUID="47cbad00-43d9-47bc-93f9-87616b11c240"
	Jan 10 08:53:52 old-k8s-version-093083 kubelet[731]: I0110 08:53:52.379560     731 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-dtt5w" podStartSLOduration=1.433085441 podCreationTimestamp="2026-01-10 08:53:45 +0000 UTC" firstStartedPulling="2026-01-10 08:53:45.392919221 +0000 UTC m=+16.220335646" lastFinishedPulling="2026-01-10 08:53:51.339314001 +0000 UTC m=+22.166730438" observedRunningTime="2026-01-10 08:53:52.379107433 +0000 UTC m=+23.206523875" watchObservedRunningTime="2026-01-10 08:53:52.379480233 +0000 UTC m=+23.206896675"
	Jan 10 08:53:55 old-k8s-version-093083 kubelet[731]: I0110 08:53:55.362157     731 scope.go:117] "RemoveContainer" containerID="dabc28c95171c3213864a92f325ebb091da4392b8a2afb9551c9f3cbfe48d2d1"
	Jan 10 08:53:55 old-k8s-version-093083 kubelet[731]: E0110 08:53:55.362463     731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h2jqh_kubernetes-dashboard(47cbad00-43d9-47bc-93f9-87616b11c240)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h2jqh" podUID="47cbad00-43d9-47bc-93f9-87616b11c240"
	Jan 10 08:54:04 old-k8s-version-093083 kubelet[731]: I0110 08:54:04.393687     731 scope.go:117] "RemoveContainer" containerID="d4e39023b51206e0be48a97c34283a8e61c92fde7bfdd6b8d4de4724d840f8df"
	Jan 10 08:54:09 old-k8s-version-093083 kubelet[731]: I0110 08:54:09.280306     731 scope.go:117] "RemoveContainer" containerID="dabc28c95171c3213864a92f325ebb091da4392b8a2afb9551c9f3cbfe48d2d1"
	Jan 10 08:54:09 old-k8s-version-093083 kubelet[731]: I0110 08:54:09.409761     731 scope.go:117] "RemoveContainer" containerID="dabc28c95171c3213864a92f325ebb091da4392b8a2afb9551c9f3cbfe48d2d1"
	Jan 10 08:54:09 old-k8s-version-093083 kubelet[731]: I0110 08:54:09.410009     731 scope.go:117] "RemoveContainer" containerID="1aa38e065133620305b9ea5ba3cc57e2dd22a6a1fcb024cc2b81ed4b8495d94a"
	Jan 10 08:54:09 old-k8s-version-093083 kubelet[731]: E0110 08:54:09.410352     731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h2jqh_kubernetes-dashboard(47cbad00-43d9-47bc-93f9-87616b11c240)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h2jqh" podUID="47cbad00-43d9-47bc-93f9-87616b11c240"
	Jan 10 08:54:15 old-k8s-version-093083 kubelet[731]: I0110 08:54:15.361875     731 scope.go:117] "RemoveContainer" containerID="1aa38e065133620305b9ea5ba3cc57e2dd22a6a1fcb024cc2b81ed4b8495d94a"
	Jan 10 08:54:15 old-k8s-version-093083 kubelet[731]: E0110 08:54:15.362290     731 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-h2jqh_kubernetes-dashboard(47cbad00-43d9-47bc-93f9-87616b11c240)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-h2jqh" podUID="47cbad00-43d9-47bc-93f9-87616b11c240"
	Jan 10 08:54:19 old-k8s-version-093083 kubelet[731]: I0110 08:54:19.326154     731 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jan 10 08:54:19 old-k8s-version-093083 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 08:54:19 old-k8s-version-093083 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 08:54:19 old-k8s-version-093083 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 08:54:19 old-k8s-version-093083 systemd[1]: kubelet.service: Consumed 1.493s CPU time.
	
	
	==> kubernetes-dashboard [e605b843e423ec01dd112548d07dcbdec1954f9df6ba09936c682d71de576f93] <==
	2026/01/10 08:53:51 Using namespace: kubernetes-dashboard
	2026/01/10 08:53:51 Using in-cluster config to connect to apiserver
	2026/01/10 08:53:51 Using secret token for csrf signing
	2026/01/10 08:53:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 08:53:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 08:53:51 Successful initial request to the apiserver, version: v1.28.0
	2026/01/10 08:53:51 Generating JWE encryption key
	2026/01/10 08:53:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 08:53:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 08:53:51 Initializing JWE encryption key from synchronized object
	2026/01/10 08:53:51 Creating in-cluster Sidecar client
	2026/01/10 08:53:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 08:53:51 Serving insecurely on HTTP port: 9090
	2026/01/10 08:53:51 Starting overwatch
	2026/01/10 08:54:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [cf2021f237b7a23412332d927f1e3fc61448f19ce766d0d96d5a317c9855bb65] <==
	I0110 08:54:04.460297       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 08:54:04.469438       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 08:54:04.469494       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0110 08:54:21.866474       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 08:54:21.866651       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-093083_5095c604-fb31-4bb5-a7ad-f088104000b9!
	I0110 08:54:21.866626       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6d81b80b-b507-4754-870f-26841432edd7", APIVersion:"v1", ResourceVersion:"639", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-093083_5095c604-fb31-4bb5-a7ad-f088104000b9 became leader
	I0110 08:54:21.966846       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-093083_5095c604-fb31-4bb5-a7ad-f088104000b9!
	
	
	==> storage-provisioner [d4e39023b51206e0be48a97c34283a8e61c92fde7bfdd6b8d4de4724d840f8df] <==
	I0110 08:53:33.691009       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 08:54:03.695136       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
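Note on the kubelet log above: the CrashLoopBackOff delay for dashboard-metrics-scraper grows from "back-off 10s" (08:53:49) to "back-off 20s" (08:54:09). Kubelet restarts a crashing container with an exponential backoff that starts at 10s, doubles per failed restart, and is capped at 5 minutes (reset after a sustained successful run). A minimal Go sketch of that schedule, illustrative only and not kubelet's actual code:

// backoff.go — sketch of the capped exponential restart backoff visible above.
package main

import (
	"fmt"
	"time"
)

// restartDelay returns the CrashLoopBackOff delay before restart n (0-based),
// assuming kubelet's documented 10s base and 5-minute cap.
func restartDelay(n int) time.Duration {
	d := 10 * time.Second << uint(n) // 10s, 20s, 40s, ...
	if limit := 5 * time.Minute; d > limit || d <= 0 {
		return limit // cap (d <= 0 guards shift overflow for large n)
	}
	return d
}

func main() {
	for n := 0; n < 7; n++ {
		fmt.Printf("restart %d: back-off %s\n", n, restartDelay(n))
	}
}

Restarts 0 and 1 reproduce the 10s and 20s delays logged for pod dashboard-metrics-scraper-5f989dc9cf-h2jqh.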
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-093083 -n old-k8s-version-093083
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-093083 -n old-k8s-version-093083: exit status 2 (410.105788ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-093083 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (5.67s)
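The pod listing in the post-mortem above uses --field-selector=status.phase!=Running to surface only pods that are not Running. The same filter can be expressed with client-go; a hedged sketch, assuming a kubeconfig path for illustration (the harness itself selects the cluster with --context old-k8s-version-093083):

// listnotrunning.go — client-go equivalent of the harness's
// `kubectl get po -A --field-selector=status.phase!=Running`.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Empty namespace ("") lists across all namespaces, like kubectl -A.
	pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Namespace + "/" + p.Name)
	}
}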

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (7.45s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-095312 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-095312 --alsologtostderr -v=1: exit status 80 (2.616875582s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-095312 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 08:54:25.693095  327813 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:54:25.693446  327813 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:25.693457  327813 out.go:374] Setting ErrFile to fd 2...
	I0110 08:54:25.693463  327813 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:25.693766  327813 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:54:25.694094  327813 out.go:368] Setting JSON to false
	I0110 08:54:25.694118  327813 mustload.go:66] Loading cluster: no-preload-095312
	I0110 08:54:25.694555  327813 config.go:182] Loaded profile config "no-preload-095312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:25.695136  327813 cli_runner.go:164] Run: docker container inspect no-preload-095312 --format={{.State.Status}}
	I0110 08:54:25.719410  327813 host.go:66] Checking if "no-preload-095312" exists ...
	I0110 08:54:25.719679  327813 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:54:25.789788  327813 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:85 SystemTime:2026-01-10 08:54:25.777443659 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:54:25.790569  327813 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:no-preload-095312 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 08:54:25.793068  327813 out.go:179] * Pausing node no-preload-095312 ... 
	I0110 08:54:25.794370  327813 host.go:66] Checking if "no-preload-095312" exists ...
	I0110 08:54:25.794674  327813 ssh_runner.go:195] Run: systemctl --version
	I0110 08:54:25.794729  327813 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-095312
	I0110 08:54:25.817530  327813 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/no-preload-095312/id_rsa Username:docker}
	I0110 08:54:25.922828  327813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:54:25.939402  327813 pause.go:52] kubelet running: true
	I0110 08:54:25.939502  327813 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 08:54:26.184038  327813 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 08:54:26.184133  327813 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 08:54:26.277611  327813 cri.go:96] found id: "a3296b381169a6ddfab7afc7323a600112a626b64f20e93c01fd3bef1f408360"
	I0110 08:54:26.277639  327813 cri.go:96] found id: "3518fca86e68415819fef31a883c95cdfc0166747df2ea66ad4b89d8b5add329"
	I0110 08:54:26.277646  327813 cri.go:96] found id: "6642492d74528ed7ea62a82a2d1e91979d058b03e408a23c08e5341c8aef7bcf"
	I0110 08:54:26.277651  327813 cri.go:96] found id: "8cc27bb53ccf039cf66d610eccb576f49bdbb73941b54ab4f3ae15da8ca459c9"
	I0110 08:54:26.277655  327813 cri.go:96] found id: "7edc134caaddf41c018c315644cbf965110ff668918e57324178f7efbba5809b"
	I0110 08:54:26.277660  327813 cri.go:96] found id: "4114f852d4bfbe76e80bef4884aabfe15cc867ff51c6109af0772dba003fc92e"
	I0110 08:54:26.277664  327813 cri.go:96] found id: "168230ea09edf78f7dbfce3346ce34e1aecc8ef7d88bf3480f0d898e0a09de74"
	I0110 08:54:26.277668  327813 cri.go:96] found id: "360adaeb3e778e316558a2aa06913d99ad52856234134b1d2a1f72db5b201faa"
	I0110 08:54:26.277673  327813 cri.go:96] found id: "7b204b358eeadb60a0ffc4d238b32e1f0914014ff649c11231353d088fbfd63e"
	I0110 08:54:26.277681  327813 cri.go:96] found id: "0bb140e695b05dc0360d1bfe69740a554acd68ace81fd737c1eef5fb1cd3c050"
	I0110 08:54:26.277686  327813 cri.go:96] found id: "f5a7c1082094a1bd85fd4d5d1b52374cfa4eb365f89b559cac9994c89013f73c"
	I0110 08:54:26.277690  327813 cri.go:96] found id: ""
	I0110 08:54:26.277763  327813 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:54:26.295542  327813 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:54:26Z" level=error msg="open /run/runc: no such file or directory"
	I0110 08:54:26.640855  327813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:54:26.661650  327813 pause.go:52] kubelet running: false
	I0110 08:54:26.661804  327813 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 08:54:26.817585  327813 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 08:54:26.817662  327813 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 08:54:26.886898  327813 cri.go:96] found id: "a3296b381169a6ddfab7afc7323a600112a626b64f20e93c01fd3bef1f408360"
	I0110 08:54:26.886926  327813 cri.go:96] found id: "3518fca86e68415819fef31a883c95cdfc0166747df2ea66ad4b89d8b5add329"
	I0110 08:54:26.886934  327813 cri.go:96] found id: "6642492d74528ed7ea62a82a2d1e91979d058b03e408a23c08e5341c8aef7bcf"
	I0110 08:54:26.886939  327813 cri.go:96] found id: "8cc27bb53ccf039cf66d610eccb576f49bdbb73941b54ab4f3ae15da8ca459c9"
	I0110 08:54:26.886944  327813 cri.go:96] found id: "7edc134caaddf41c018c315644cbf965110ff668918e57324178f7efbba5809b"
	I0110 08:54:26.886950  327813 cri.go:96] found id: "4114f852d4bfbe76e80bef4884aabfe15cc867ff51c6109af0772dba003fc92e"
	I0110 08:54:26.886955  327813 cri.go:96] found id: "168230ea09edf78f7dbfce3346ce34e1aecc8ef7d88bf3480f0d898e0a09de74"
	I0110 08:54:26.886958  327813 cri.go:96] found id: "360adaeb3e778e316558a2aa06913d99ad52856234134b1d2a1f72db5b201faa"
	I0110 08:54:26.886962  327813 cri.go:96] found id: "7b204b358eeadb60a0ffc4d238b32e1f0914014ff649c11231353d088fbfd63e"
	I0110 08:54:26.886982  327813 cri.go:96] found id: "0bb140e695b05dc0360d1bfe69740a554acd68ace81fd737c1eef5fb1cd3c050"
	I0110 08:54:26.886985  327813 cri.go:96] found id: "f5a7c1082094a1bd85fd4d5d1b52374cfa4eb365f89b559cac9994c89013f73c"
	I0110 08:54:26.886988  327813 cri.go:96] found id: ""
	I0110 08:54:26.887030  327813 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:54:27.287423  327813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:54:27.300916  327813 pause.go:52] kubelet running: false
	I0110 08:54:27.300978  327813 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 08:54:27.452262  327813 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 08:54:27.452347  327813 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 08:54:27.533555  327813 cri.go:96] found id: "a3296b381169a6ddfab7afc7323a600112a626b64f20e93c01fd3bef1f408360"
	I0110 08:54:27.533583  327813 cri.go:96] found id: "3518fca86e68415819fef31a883c95cdfc0166747df2ea66ad4b89d8b5add329"
	I0110 08:54:27.533588  327813 cri.go:96] found id: "6642492d74528ed7ea62a82a2d1e91979d058b03e408a23c08e5341c8aef7bcf"
	I0110 08:54:27.533593  327813 cri.go:96] found id: "8cc27bb53ccf039cf66d610eccb576f49bdbb73941b54ab4f3ae15da8ca459c9"
	I0110 08:54:27.533596  327813 cri.go:96] found id: "7edc134caaddf41c018c315644cbf965110ff668918e57324178f7efbba5809b"
	I0110 08:54:27.533600  327813 cri.go:96] found id: "4114f852d4bfbe76e80bef4884aabfe15cc867ff51c6109af0772dba003fc92e"
	I0110 08:54:27.533602  327813 cri.go:96] found id: "168230ea09edf78f7dbfce3346ce34e1aecc8ef7d88bf3480f0d898e0a09de74"
	I0110 08:54:27.533605  327813 cri.go:96] found id: "360adaeb3e778e316558a2aa06913d99ad52856234134b1d2a1f72db5b201faa"
	I0110 08:54:27.533607  327813 cri.go:96] found id: "7b204b358eeadb60a0ffc4d238b32e1f0914014ff649c11231353d088fbfd63e"
	I0110 08:54:27.533614  327813 cri.go:96] found id: "0bb140e695b05dc0360d1bfe69740a554acd68ace81fd737c1eef5fb1cd3c050"
	I0110 08:54:27.533623  327813 cri.go:96] found id: "f5a7c1082094a1bd85fd4d5d1b52374cfa4eb365f89b559cac9994c89013f73c"
	I0110 08:54:27.533625  327813 cri.go:96] found id: ""
	I0110 08:54:27.533666  327813 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:54:27.866973  327813 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:54:27.885525  327813 pause.go:52] kubelet running: false
	I0110 08:54:27.885691  327813 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 08:54:28.109145  327813 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 08:54:28.109244  327813 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 08:54:28.203130  327813 cri.go:96] found id: "a3296b381169a6ddfab7afc7323a600112a626b64f20e93c01fd3bef1f408360"
	I0110 08:54:28.203154  327813 cri.go:96] found id: "3518fca86e68415819fef31a883c95cdfc0166747df2ea66ad4b89d8b5add329"
	I0110 08:54:28.203173  327813 cri.go:96] found id: "6642492d74528ed7ea62a82a2d1e91979d058b03e408a23c08e5341c8aef7bcf"
	I0110 08:54:28.203179  327813 cri.go:96] found id: "8cc27bb53ccf039cf66d610eccb576f49bdbb73941b54ab4f3ae15da8ca459c9"
	I0110 08:54:28.203184  327813 cri.go:96] found id: "7edc134caaddf41c018c315644cbf965110ff668918e57324178f7efbba5809b"
	I0110 08:54:28.203189  327813 cri.go:96] found id: "4114f852d4bfbe76e80bef4884aabfe15cc867ff51c6109af0772dba003fc92e"
	I0110 08:54:28.203193  327813 cri.go:96] found id: "168230ea09edf78f7dbfce3346ce34e1aecc8ef7d88bf3480f0d898e0a09de74"
	I0110 08:54:28.203203  327813 cri.go:96] found id: "360adaeb3e778e316558a2aa06913d99ad52856234134b1d2a1f72db5b201faa"
	I0110 08:54:28.203207  327813 cri.go:96] found id: "7b204b358eeadb60a0ffc4d238b32e1f0914014ff649c11231353d088fbfd63e"
	I0110 08:54:28.203215  327813 cri.go:96] found id: "0bb140e695b05dc0360d1bfe69740a554acd68ace81fd737c1eef5fb1cd3c050"
	I0110 08:54:28.203219  327813 cri.go:96] found id: "f5a7c1082094a1bd85fd4d5d1b52374cfa4eb365f89b559cac9994c89013f73c"
	I0110 08:54:28.203222  327813 cri.go:96] found id: ""
	I0110 08:54:28.203268  327813 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:54:28.226310  327813 out.go:203] 
	W0110 08:54:28.228498  327813 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:54:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:54:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 08:54:28.228525  327813 out.go:285] * 
	* 
	W0110 08:54:28.230962  327813 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:54:28.232232  327813 out.go:203] 

                                                
                                                
** /stderr **
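The stderr above traces minikube's pause path: it checks whether kubelet is active, disables it, enumerates kube-system/kubernetes-dashboard/istio-operator containers via crictl, and then runs `sudo runc list -f json`, which exits 1 because /run/runc does not exist; after several retries the failure surfaces as GUEST_PAUSE. A minimal Go sketch of that failing step, distinguishing the missing state directory from other errors (not minikube's actual implementation; the classification is an assumption):

// runclist.go — sketch of the `sudo runc list -f json` step that fails above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

var errNoRuncState = errors.New("runc state dir /run/runc does not exist")

func listRuncContainers() (string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// The failure in the log: runc exits 1 with
		// `open /run/runc: no such file or directory` on stderr.
		if strings.Contains(string(out), "/run/runc: no such file or directory") {
			return "", errNoRuncState
		}
		return "", fmt.Errorf("runc list: %w: %s", err, out)
	}
	return string(out), nil
}

func main() {
	out, err := listRuncContainers()
	switch {
	case errors.Is(err, errNoRuncState):
		// Under CRI-O with a non-default runtime root this can be expected;
		// treating it as fatal is what surfaces as GUEST_PAUSE above.
		fmt.Println("no runc state directory; nothing runc-managed to pause")
	case err != nil:
		fmt.Println("error:", err)
	default:
		fmt.Println(out)
	}
}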
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-095312 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-095312
helpers_test.go:244: (dbg) docker inspect no-preload-095312:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b55d6d4fd1b202097df2085e713e014ab91340931a47b47e91685006d96552db",
	        "Created": "2026-01-10T08:52:10.613870109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 314084,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T08:53:24.702221053Z",
	            "FinishedAt": "2026-01-10T08:53:23.781642135Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/b55d6d4fd1b202097df2085e713e014ab91340931a47b47e91685006d96552db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b55d6d4fd1b202097df2085e713e014ab91340931a47b47e91685006d96552db/hostname",
	        "HostsPath": "/var/lib/docker/containers/b55d6d4fd1b202097df2085e713e014ab91340931a47b47e91685006d96552db/hosts",
	        "LogPath": "/var/lib/docker/containers/b55d6d4fd1b202097df2085e713e014ab91340931a47b47e91685006d96552db/b55d6d4fd1b202097df2085e713e014ab91340931a47b47e91685006d96552db-json.log",
	        "Name": "/no-preload-095312",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-095312:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-095312",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b55d6d4fd1b202097df2085e713e014ab91340931a47b47e91685006d96552db",
	                "LowerDir": "/var/lib/docker/overlay2/564ca6ef3c40a4c5b327dca6e24a3966439d268b426f5aa657f7c665c9e2702e-init/diff:/var/lib/docker/overlay2/97b14eb520192356c991915ce74cd6094e6ef948f398ac688b667ea18491ccde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/564ca6ef3c40a4c5b327dca6e24a3966439d268b426f5aa657f7c665c9e2702e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/564ca6ef3c40a4c5b327dca6e24a3966439d268b426f5aa657f7c665c9e2702e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/564ca6ef3c40a4c5b327dca6e24a3966439d268b426f5aa657f7c665c9e2702e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-095312",
	                "Source": "/var/lib/docker/volumes/no-preload-095312/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-095312",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-095312",
	                "name.minikube.sigs.k8s.io": "no-preload-095312",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "801ad71b8ddab750772fe0aa9298dc4995797a2611d5dc3b22dfe0bdb075a0d6",
	            "SandboxKey": "/var/run/docker/netns/801ad71b8dda",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-095312": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6f7b848a3ceb686079b5c418ee8c05b8e3ebe9353b6cbb1033bc657f18ffab5a",
	                    "EndpointID": "6dc3c8f2003ecc000806eb2171fdc465b3e7743d5483de7fc091c5914b9ccbeb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "6e:6f:25:c7:c0:89",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-095312",
	                        "b55d6d4fd1b2"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
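From inspect output like the above, the harness extracts the SSH host port (33113 here) with the Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}} — see the cli_runner call at 08:54:25.794729. A hedged equivalent that decodes the same JSON with encoding/json instead of a template:

// sshport.go — sketch: pull the 22/tcp host binding out of `docker inspect` JSON.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Only the fields needed from the inspect document.
type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "container", "inspect", "no-preload-095312").Output()
	if err != nil {
		log.Fatal(err)
	}
	var containers []inspect // docker inspect always emits a JSON array
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	if len(containers) == 0 {
		log.Fatal("no such container")
	}
	bindings := containers[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		log.Fatal("no host binding for 22/tcp")
	}
	fmt.Println(bindings[0].HostIp + ":" + bindings[0].HostPort) // 127.0.0.1:33113
}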
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-095312 -n no-preload-095312
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-095312 -n no-preload-095312: exit status 2 (426.603481ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-095312 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-095312 logs -n 25: (1.749845657s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-472660 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ ssh     │ -p flannel-472660 sudo crio config                                                                                                                                                                                                            │ flannel-472660               │ jenkins │ v1.37.0 │ 10 Jan 26 08:52 UTC │ 10 Jan 26 08:52 UTC │
	│ delete  │ -p disable-driver-mounts-847921                                                                                                                                                                                                               │ disable-driver-mounts-847921 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p default-k8s-diff-port-225354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-093083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-095312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ stop    │ -p old-k8s-version-093083 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ stop    │ -p no-preload-095312 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-093083 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p old-k8s-version-093083 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable dashboard -p no-preload-095312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p no-preload-095312 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable metrics-server -p embed-certs-072273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-072273           │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ stop    │ -p embed-certs-072273 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-072273           │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-072273 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-072273           │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p embed-certs-072273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-072273           │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-225354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-225354 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-225354 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p default-k8s-diff-port-225354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ image   │ old-k8s-version-093083 image list --format=json                                                                                                                                                                                               │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p old-k8s-version-093083 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p old-k8s-version-093083                                                                                                                                                                                                                     │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ image   │ no-preload-095312 image list --format=json                                                                                                                                                                                                    │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p no-preload-095312 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:54:10
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
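Each entry below follows this format: a severity letter fused to the date, a microsecond timestamp, the logging process id, and the source file:line that emitted the message. Decoding the first entry:

	I0110 08:54:10.770403  323767 out.go:360] ...
	  I               -> severity (I=Info, W=Warning, E=Error, F=Fatal)
	  0110            -> mmdd, January 10
	  08:54:10.770403 -> hh:mm:ss.uuuuuu
	  323767          -> thread id (the minikube process)
	  out.go:360      -> emitting source file and line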
	I0110 08:54:10.770403  323767 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:54:10.770628  323767 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:10.770636  323767 out.go:374] Setting ErrFile to fd 2...
	I0110 08:54:10.770640  323767 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:10.770838  323767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:54:10.771277  323767 out.go:368] Setting JSON to false
	I0110 08:54:10.772500  323767 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2203,"bootTime":1768033048,"procs":370,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:54:10.772550  323767 start.go:143] virtualization: kvm guest
	I0110 08:54:10.774548  323767 out.go:179] * [default-k8s-diff-port-225354] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 08:54:10.775930  323767 notify.go:221] Checking for updates...
	I0110 08:54:10.775987  323767 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:54:10.777327  323767 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:54:10.778691  323767 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:54:10.779869  323767 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:54:10.780877  323767 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 08:54:10.782001  323767 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:54:10.783447  323767 config.go:182] Loaded profile config "default-k8s-diff-port-225354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:10.784033  323767 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:54:10.808861  323767 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:54:10.808963  323767 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:54:10.865984  323767 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-10 08:54:10.855636003 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:54:10.866142  323767 docker.go:319] overlay module found
	I0110 08:54:10.867929  323767 out.go:179] * Using the docker driver based on existing profile
	I0110 08:54:10.869062  323767 start.go:309] selected driver: docker
	I0110 08:54:10.869077  323767 start.go:928] validating driver "docker" against &{Name:default-k8s-diff-port-225354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-225354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:54:10.869184  323767 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:54:10.869926  323767 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:54:10.925502  323767 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:72 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-10 08:54:10.916316006 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:54:10.925807  323767 start_flags.go:1019] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 08:54:10.925849  323767 cni.go:84] Creating CNI manager for ""
	I0110 08:54:10.925905  323767 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:54:10.925939  323767 start.go:353] cluster config:
	{Name:default-k8s-diff-port-225354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-225354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:54:10.927799  323767 out.go:179] * Starting "default-k8s-diff-port-225354" primary control-plane node in "default-k8s-diff-port-225354" cluster
	I0110 08:54:10.928989  323767 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 08:54:10.930145  323767 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:54:10.931151  323767 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:54:10.931179  323767 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 08:54:10.931186  323767 cache.go:65] Caching tarball of preloaded images
	I0110 08:54:10.931185  323767 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:54:10.931262  323767 preload.go:251] Found /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 08:54:10.931274  323767 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 08:54:10.931366  323767 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/config.json ...
	I0110 08:54:10.952478  323767 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 08:54:10.952497  323767 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 08:54:10.952511  323767 cache.go:243] Successfully downloaded all kic artifacts
	I0110 08:54:10.952538  323767 start.go:360] acquireMachinesLock for default-k8s-diff-port-225354: {Name:mk6f4cf32f69b6a51f12f83adcd3cd0eb0ae8cbf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:54:10.952590  323767 start.go:364] duration metric: took 34.986µs to acquireMachinesLock for "default-k8s-diff-port-225354"
	I0110 08:54:10.952607  323767 start.go:96] Skipping create...Using existing machine configuration
	I0110 08:54:10.952614  323767 fix.go:54] fixHost starting: 
	I0110 08:54:10.952835  323767 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225354 --format={{.State.Status}}
	I0110 08:54:10.971677  323767 fix.go:112] recreateIfNeeded on default-k8s-diff-port-225354: state=Stopped err=<nil>
	W0110 08:54:10.971712  323767 fix.go:138] unexpected machine state, will restart: <nil>
	W0110 08:54:09.764911  319849 pod_ready.go:104] pod "coredns-7d764666f9-ss4nt" is not "Ready", error: <nil>
	W0110 08:54:12.264373  319849 pod_ready.go:104] pod "coredns-7d764666f9-ss4nt" is not "Ready", error: <nil>
	W0110 08:54:10.447913  313874 pod_ready.go:104] pod "coredns-7d764666f9-wpsnn" is not "Ready", error: <nil>
	I0110 08:54:12.447442  313874 pod_ready.go:94] pod "coredns-7d764666f9-wpsnn" is "Ready"
	I0110 08:54:12.447465  313874 pod_ready.go:86] duration metric: took 37.005475257s for pod "coredns-7d764666f9-wpsnn" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:12.450109  313874 pod_ready.go:83] waiting for pod "etcd-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:12.454228  313874 pod_ready.go:94] pod "etcd-no-preload-095312" is "Ready"
	I0110 08:54:12.454256  313874 pod_ready.go:86] duration metric: took 4.12175ms for pod "etcd-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:12.456424  313874 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:12.460419  313874 pod_ready.go:94] pod "kube-apiserver-no-preload-095312" is "Ready"
	I0110 08:54:12.460442  313874 pod_ready.go:86] duration metric: took 3.995934ms for pod "kube-apiserver-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:12.462584  313874 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:12.645718  313874 pod_ready.go:94] pod "kube-controller-manager-no-preload-095312" is "Ready"
	I0110 08:54:12.645758  313874 pod_ready.go:86] duration metric: took 183.153558ms for pod "kube-controller-manager-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:12.845858  313874 pod_ready.go:83] waiting for pod "kube-proxy-vrzf6" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:13.246243  313874 pod_ready.go:94] pod "kube-proxy-vrzf6" is "Ready"
	I0110 08:54:13.246269  313874 pod_ready.go:86] duration metric: took 400.386349ms for pod "kube-proxy-vrzf6" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:13.445337  313874 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:13.845542  313874 pod_ready.go:94] pod "kube-scheduler-no-preload-095312" is "Ready"
	I0110 08:54:13.845566  313874 pod_ready.go:86] duration metric: took 400.206561ms for pod "kube-scheduler-no-preload-095312" in "kube-system" namespace to be "Ready" or be gone ...
	I0110 08:54:13.845577  313874 pod_ready.go:40] duration metric: took 38.40686605s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 08:54:13.890931  313874 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 08:54:13.892708  313874 out.go:179] * Done! kubectl is now configured to use "no-preload-095312" cluster and "default" namespace by default
	I0110 08:54:10.973787  323767 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-225354" ...
	I0110 08:54:10.973853  323767 cli_runner.go:164] Run: docker start default-k8s-diff-port-225354
	I0110 08:54:11.238333  323767 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225354 --format={{.State.Status}}
	I0110 08:54:11.258016  323767 kic.go:430] container "default-k8s-diff-port-225354" state is running.
	I0110 08:54:11.258559  323767 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225354
	I0110 08:54:11.280398  323767 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/config.json ...
	I0110 08:54:11.280702  323767 machine.go:94] provisionDockerMachine start ...
	I0110 08:54:11.280828  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:11.301429  323767 main.go:144] libmachine: Using SSH client type: native
	I0110 08:54:11.301668  323767 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0110 08:54:11.301681  323767 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 08:54:11.302419  323767 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42400->127.0.0.1:33123: read: connection reset by peer
	I0110 08:54:14.431592  323767 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-225354
	
	I0110 08:54:14.431635  323767 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-225354"
	I0110 08:54:14.431702  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:14.451318  323767 main.go:144] libmachine: Using SSH client type: native
	I0110 08:54:14.451515  323767 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0110 08:54:14.451527  323767 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-225354 && echo "default-k8s-diff-port-225354" | sudo tee /etc/hostname
	I0110 08:54:14.589004  323767 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-225354
	
	I0110 08:54:14.589083  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:14.607514  323767 main.go:144] libmachine: Using SSH client type: native
	I0110 08:54:14.607721  323767 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0110 08:54:14.607763  323767 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-225354' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-225354/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-225354' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 08:54:14.737006  323767 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 08:54:14.737035  323767 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-3641/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-3641/.minikube}
	I0110 08:54:14.737067  323767 ubuntu.go:190] setting up certificates
	I0110 08:54:14.737089  323767 provision.go:84] configureAuth start
	I0110 08:54:14.737149  323767 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225354
	I0110 08:54:14.756076  323767 provision.go:143] copyHostCerts
	I0110 08:54:14.756148  323767 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem, removing ...
	I0110 08:54:14.756164  323767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem
	I0110 08:54:14.756236  323767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem (1078 bytes)
	I0110 08:54:14.756404  323767 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem, removing ...
	I0110 08:54:14.756417  323767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem
	I0110 08:54:14.756450  323767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem (1123 bytes)
	I0110 08:54:14.756528  323767 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem, removing ...
	I0110 08:54:14.756537  323767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem
	I0110 08:54:14.756563  323767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem (1675 bytes)
	I0110 08:54:14.756647  323767 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-225354 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-225354 localhost minikube]
	I0110 08:54:14.793509  323767 provision.go:177] copyRemoteCerts
	I0110 08:54:14.793560  323767 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 08:54:14.793595  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:14.813116  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:14.905947  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0110 08:54:14.924947  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0110 08:54:14.942427  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0110 08:54:14.959358  323767 provision.go:87] duration metric: took 222.24641ms to configureAuth
	I0110 08:54:14.959385  323767 ubuntu.go:206] setting minikube options for container-runtime
	I0110 08:54:14.959541  323767 config.go:182] Loaded profile config "default-k8s-diff-port-225354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:14.959639  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:14.978423  323767 main.go:144] libmachine: Using SSH client type: native
	I0110 08:54:14.978687  323767 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I0110 08:54:14.978709  323767 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 08:54:15.292502  323767 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 08:54:15.292533  323767 machine.go:97] duration metric: took 4.011809959s to provisionDockerMachine
	I0110 08:54:15.292549  323767 start.go:293] postStartSetup for "default-k8s-diff-port-225354" (driver="docker")
	I0110 08:54:15.292564  323767 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 08:54:15.292642  323767 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 08:54:15.292693  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:15.314158  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:15.408580  323767 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 08:54:15.412461  323767 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 08:54:15.412484  323767 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 08:54:15.412494  323767 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-3641/.minikube/addons for local assets ...
	I0110 08:54:15.412543  323767 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-3641/.minikube/files for local assets ...
	I0110 08:54:15.412618  323767 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem -> 71832.pem in /etc/ssl/certs
	I0110 08:54:15.412701  323767 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 08:54:15.420257  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem --> /etc/ssl/certs/71832.pem (1708 bytes)
	I0110 08:54:15.437907  323767 start.go:296] duration metric: took 145.342731ms for postStartSetup
	I0110 08:54:15.437987  323767 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:54:15.438056  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:15.456452  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:15.547075  323767 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 08:54:15.551926  323767 fix.go:56] duration metric: took 4.599307206s for fixHost
	I0110 08:54:15.551952  323767 start.go:83] releasing machines lock for "default-k8s-diff-port-225354", held for 4.599352578s
	I0110 08:54:15.552009  323767 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-225354
	I0110 08:54:15.571390  323767 ssh_runner.go:195] Run: cat /version.json
	I0110 08:54:15.571479  323767 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 08:54:15.571492  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:15.571536  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:15.590047  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:15.591127  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:15.681002  323767 ssh_runner.go:195] Run: systemctl --version
	I0110 08:54:15.736158  323767 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 08:54:15.771411  323767 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 08:54:15.776401  323767 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 08:54:15.776474  323767 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 08:54:15.784643  323767 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 08:54:15.784665  323767 start.go:496] detecting cgroup driver to use...
	I0110 08:54:15.784700  323767 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 08:54:15.784774  323767 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 08:54:15.799081  323767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 08:54:15.812276  323767 docker.go:218] disabling cri-docker service (if available) ...
	I0110 08:54:15.812336  323767 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 08:54:15.826890  323767 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 08:54:15.839388  323767 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 08:54:15.922811  323767 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 08:54:15.998942  323767 docker.go:234] disabling docker service ...
	I0110 08:54:15.999015  323767 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 08:54:16.014407  323767 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 08:54:16.026725  323767 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 08:54:16.107584  323767 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 08:54:16.187958  323767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 08:54:16.200970  323767 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 08:54:16.215874  323767 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 08:54:16.215939  323767 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:54:16.225363  323767 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 08:54:16.225421  323767 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:54:16.234046  323767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:54:16.242715  323767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:54:16.251754  323767 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 08:54:16.260507  323767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:54:16.270006  323767 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:54:16.278297  323767 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
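Taken together, the sed/grep edits above should leave /etc/crio/crio.conf.d/02-crio.conf containing roughly the following fragment. This is reconstructed from the commands, not captured from the node:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]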
	I0110 08:54:16.287021  323767 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 08:54:16.295062  323767 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 08:54:16.302531  323767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:54:16.386036  323767 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0110 08:54:16.519040  323767 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 08:54:16.519096  323767 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 08:54:16.523210  323767 start.go:574] Will wait 60s for crictl version
	I0110 08:54:16.523262  323767 ssh_runner.go:195] Run: which crictl
	I0110 08:54:16.526960  323767 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 08:54:16.555412  323767 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 08:54:16.555483  323767 ssh_runner.go:195] Run: crio --version
	I0110 08:54:16.583901  323767 ssh_runner.go:195] Run: crio --version
	I0110 08:54:16.612570  323767 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 08:54:16.613832  323767 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-225354 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:54:16.631782  323767 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0110 08:54:16.636032  323767 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 08:54:16.646878  323767 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-225354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-225354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 08:54:16.646997  323767 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:54:16.647043  323767 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 08:54:16.681410  323767 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 08:54:16.681432  323767 crio.go:433] Images already preloaded, skipping extraction
	I0110 08:54:16.681488  323767 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 08:54:16.709542  323767 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 08:54:16.709564  323767 cache_images.go:86] Images are preloaded, skipping loading
	I0110 08:54:16.709578  323767 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.35.0 crio true true} ...
	I0110 08:54:16.709686  323767 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-225354 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-225354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
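The empty ExecStart= in the drop-in above is the standard systemd override idiom: a service may carry only one ExecStart, so the value inherited from the base kubelet.service must be cleared before the kubeadm flags are set. The general pattern:

	[Service]
	# clear the ExecStart inherited from the base unit, then redefine it
	ExecStart=
	ExecStart=/path/to/binary --overriding --flags

The drop-in itself lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf via the scp a few lines below.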
	I0110 08:54:16.709773  323767 ssh_runner.go:195] Run: crio config
	I0110 08:54:16.757583  323767 cni.go:84] Creating CNI manager for ""
	I0110 08:54:16.757609  323767 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:54:16.757627  323767 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I0110 08:54:16.757647  323767 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-225354 NodeName:default-k8s-diff-port-225354 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 08:54:16.757801  323767 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-225354"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
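The rendered document above is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp below) and later diffed against the live copy. To sanity-check such a file by hand, kubeadm v1.26+ ships a validator; a sketch using the binaries path from this run:

	sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new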
	
	I0110 08:54:16.757897  323767 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 08:54:16.767516  323767 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 08:54:16.767578  323767 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 08:54:16.775454  323767 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0110 08:54:16.788355  323767 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 08:54:16.801342  323767 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0110 08:54:16.814642  323767 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0110 08:54:16.819369  323767 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 08:54:16.829406  323767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:54:16.909443  323767 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:54:16.933270  323767 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354 for IP: 192.168.85.2
	I0110 08:54:16.933296  323767 certs.go:195] generating shared ca certs ...
	I0110 08:54:16.933320  323767 certs.go:227] acquiring lock for ca certs: {Name:mk00e261408d0e9fd9be39128613c5110a764de2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:54:16.933503  323767 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.key
	I0110 08:54:16.933570  323767 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.key
	I0110 08:54:16.933585  323767 certs.go:257] generating profile certs ...
	I0110 08:54:16.933711  323767 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/client.key
	I0110 08:54:16.933843  323767 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/apiserver.key.b2f93262
	I0110 08:54:16.933914  323767 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/proxy-client.key
	I0110 08:54:16.934071  323767 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183.pem (1338 bytes)
	W0110 08:54:16.934116  323767 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183_empty.pem, impossibly tiny 0 bytes
	I0110 08:54:16.934130  323767 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem (1675 bytes)
	I0110 08:54:16.934171  323767 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem (1078 bytes)
	I0110 08:54:16.934216  323767 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem (1123 bytes)
	I0110 08:54:16.934253  323767 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem (1675 bytes)
	I0110 08:54:16.934322  323767 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem (1708 bytes)
	I0110 08:54:16.935216  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 08:54:16.954242  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 08:54:16.973102  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 08:54:16.991857  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 08:54:17.016862  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0110 08:54:17.038329  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0110 08:54:17.058014  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 08:54:17.078592  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/default-k8s-diff-port-225354/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0110 08:54:17.097918  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183.pem --> /usr/share/ca-certificates/7183.pem (1338 bytes)
	I0110 08:54:17.120708  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem --> /usr/share/ca-certificates/71832.pem (1708 bytes)
	I0110 08:54:17.139003  323767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 08:54:17.156524  323767 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 08:54:17.168668  323767 ssh_runner.go:195] Run: openssl version
	I0110 08:54:17.175368  323767 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7183.pem
	I0110 08:54:17.182747  323767 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7183.pem /etc/ssl/certs/7183.pem
	I0110 08:54:17.190691  323767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7183.pem
	I0110 08:54:17.194457  323767 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 08:23 /usr/share/ca-certificates/7183.pem
	I0110 08:54:17.194502  323767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7183.pem
	I0110 08:54:17.228543  323767 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 08:54:17.236153  323767 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/71832.pem
	I0110 08:54:17.243355  323767 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/71832.pem /etc/ssl/certs/71832.pem
	I0110 08:54:17.250614  323767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71832.pem
	I0110 08:54:17.254314  323767 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 08:23 /usr/share/ca-certificates/71832.pem
	I0110 08:54:17.254360  323767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71832.pem
	I0110 08:54:17.291080  323767 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 08:54:17.299045  323767 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:54:17.306390  323767 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 08:54:17.314035  323767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:54:17.317953  323767 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:54:17.318000  323767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:54:17.355000  323767 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
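The ls / openssl x509 -hash / test -L triples above implement OpenSSL's hashed-directory convention: TLS clients locate a CA in /etc/ssl/certs through a symlink named <subject-hash>.0. Reproducing the last probe by hand (hash value as logged):

	# subject hash of the minikube CA; prints b5213941, matching the probe above
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# minikube only asserts that the hash link exists
	sudo test -L /etc/ssl/certs/b5213941.0 && echo present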
	I0110 08:54:17.362980  323767 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 08:54:17.367171  323767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 08:54:17.402450  323767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 08:54:17.439845  323767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 08:54:17.488989  323767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 08:54:17.547905  323767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 08:54:17.597239  323767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
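The six -checkend probes above ask openssl whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; the answer is carried in the exit status rather than stdout, which is why no output appears in the log. Standalone form of one probe:

	# exit 0: valid for at least another 24h; exit 1: expiring or expired
	sudo openssl x509 -noout -checkend 86400 \
	  -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
	  && echo 'ok for 24h' || echo 'renew soon'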
	I0110 08:54:17.641402  323767 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-225354 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:default-k8s-diff-port-225354 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:54:17.641512  323767 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:54:17.641568  323767 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:54:17.671428  323767 cri.go:96] found id: "85fbcb73a888a911d321e3d1ed0152e1aa93447d76ca22015d3a09638892f2af"
	I0110 08:54:17.671452  323767 cri.go:96] found id: "6de83a52f42b4d00ef4463aa0a10635035e611d92fcb5f692497cd23e40d7676"
	I0110 08:54:17.671467  323767 cri.go:96] found id: "767f06c98be9d86d55d0cbaaa375406db22fd312258e490654cdcba950d47c27"
	I0110 08:54:17.671472  323767 cri.go:96] found id: "5055dfe1945b7e474350afd64ade8604c08027a381ce57320b00e445ef977a5c"
	I0110 08:54:17.671475  323767 cri.go:96] found id: ""
	I0110 08:54:17.671511  323767 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 08:54:17.683716  323767 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:54:17Z" level=error msg="open /run/runc: no such file or directory"
	I0110 08:54:17.683818  323767 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 08:54:17.692500  323767 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 08:54:17.692519  323767 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 08:54:17.692563  323767 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 08:54:17.700875  323767 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 08:54:17.702210  323767 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-225354" does not appear in /home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:54:17.703105  323767 kubeconfig.go:62] /home/jenkins/minikube-integration/22427-3641/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-225354" cluster setting kubeconfig missing "default-k8s-diff-port-225354" context setting]
	I0110 08:54:17.704607  323767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/kubeconfig: {Name:mk0d29b3b0ee1fd71729aff31f901be50a2d0664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:54:17.706397  323767 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 08:54:17.714287  323767 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I0110 08:54:17.714312  323767 kubeadm.go:602] duration metric: took 21.788411ms to restartPrimaryControlPlane
	I0110 08:54:17.714319  323767 kubeadm.go:403] duration metric: took 72.928609ms to StartCluster
	I0110 08:54:17.714335  323767 settings.go:142] acquiring lock: {Name:mkbb32fc6441ceb31ce2923ea8999f8375298f08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:54:17.714398  323767 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:54:17.715957  323767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/kubeconfig: {Name:mk0d29b3b0ee1fd71729aff31f901be50a2d0664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:54:17.716233  323767 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 08:54:17.716303  323767 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 08:54:17.716385  323767 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-225354"
	I0110 08:54:17.716404  323767 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-225354"
	W0110 08:54:17.716410  323767 addons.go:248] addon storage-provisioner should already be in state true
	I0110 08:54:17.716433  323767 host.go:66] Checking if "default-k8s-diff-port-225354" exists ...
	I0110 08:54:17.716458  323767 config.go:182] Loaded profile config "default-k8s-diff-port-225354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:17.716558  323767 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-225354"
	I0110 08:54:17.716606  323767 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-225354"
	I0110 08:54:17.716526  323767 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-225354"
	I0110 08:54:17.716694  323767 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-225354"
	W0110 08:54:17.716710  323767 addons.go:248] addon dashboard should already be in state true
	I0110 08:54:17.716747  323767 host.go:66] Checking if "default-k8s-diff-port-225354" exists ...
	I0110 08:54:17.716965  323767 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225354 --format={{.State.Status}}
	I0110 08:54:17.716965  323767 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225354 --format={{.State.Status}}
	I0110 08:54:17.717413  323767 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225354 --format={{.State.Status}}
	I0110 08:54:17.721904  323767 out.go:179] * Verifying Kubernetes components...
	I0110 08:54:17.723462  323767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:54:17.745550  323767 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 08:54:17.745608  323767 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 08:54:17.746683  323767 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 08:54:17.746701  323767 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 08:54:17.746704  323767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	W0110 08:54:14.265096  319849 pod_ready.go:104] pod "coredns-7d764666f9-ss4nt" is not "Ready", error: <nil>
	W0110 08:54:16.764367  319849 pod_ready.go:104] pod "coredns-7d764666f9-ss4nt" is not "Ready", error: <nil>
	I0110 08:54:17.746812  323767 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-225354"
	W0110 08:54:17.746828  323767 addons.go:248] addon default-storageclass should already be in state true
	I0110 08:54:17.746787  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:17.746853  323767 host.go:66] Checking if "default-k8s-diff-port-225354" exists ...
	I0110 08:54:17.747311  323767 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225354 --format={{.State.Status}}
	I0110 08:54:17.747889  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 08:54:17.747930  323767 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 08:54:17.747987  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:17.783552  323767 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 08:54:17.783576  323767 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 08:54:17.783630  323767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:54:17.783875  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:17.785980  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:17.810678  323767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:54:17.872366  323767 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:54:17.887886  323767 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-225354" to be "Ready" ...
	I0110 08:54:17.898099  323767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 08:54:17.903229  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 08:54:17.903253  323767 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 08:54:17.917337  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 08:54:17.917360  323767 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 08:54:17.921609  323767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 08:54:17.933302  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 08:54:17.933326  323767 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 08:54:17.947626  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 08:54:17.947646  323767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 08:54:17.960408  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 08:54:17.960475  323767 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 08:54:17.974266  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 08:54:17.974295  323767 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 08:54:17.986776  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 08:54:17.986799  323767 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 08:54:17.999337  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 08:54:17.999358  323767 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 08:54:18.011953  323767 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 08:54:18.011978  323767 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 08:54:18.024936  323767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 08:54:19.541880  323767 node_ready.go:49] node "default-k8s-diff-port-225354" is "Ready"
	I0110 08:54:19.541922  323767 node_ready.go:38] duration metric: took 1.653997821s for node "default-k8s-diff-port-225354" to be "Ready" ...
	I0110 08:54:19.541939  323767 api_server.go:52] waiting for apiserver process to appear ...
	I0110 08:54:19.541994  323767 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 08:54:20.082614  323767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.184477569s)
	I0110 08:54:20.082684  323767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.161045842s)
	I0110 08:54:20.082816  323767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.057848718s)
	I0110 08:54:20.082870  323767 api_server.go:72] duration metric: took 2.366605517s to wait for apiserver process to appear ...
	I0110 08:54:20.082941  323767 api_server.go:88] waiting for apiserver healthz status ...
	I0110 08:54:20.082962  323767 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0110 08:54:20.084235  323767 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-225354 addons enable metrics-server
	
	I0110 08:54:20.087836  323767 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 08:54:20.087861  323767 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 08:54:20.091293  323767 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 08:54:20.092886  323767 addons.go:530] duration metric: took 2.376597654s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0110 08:54:20.583799  323767 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0110 08:54:20.588631  323767 api_server.go:325] https://192.168.85.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 08:54:20.588668  323767 api_server.go:103] status: https://192.168.85.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 08:54:18.766964  319849 pod_ready.go:104] pod "coredns-7d764666f9-ss4nt" is not "Ready", error: <nil>
	W0110 08:54:21.265027  319849 pod_ready.go:104] pod "coredns-7d764666f9-ss4nt" is not "Ready", error: <nil>
	W0110 08:54:23.265352  319849 pod_ready.go:104] pod "coredns-7d764666f9-ss4nt" is not "Ready", error: <nil>
	I0110 08:54:21.083006  323767 api_server.go:299] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I0110 08:54:21.090246  323767 api_server.go:325] https://192.168.85.2:8444/healthz returned 200:
	ok
	I0110 08:54:21.091816  323767 api_server.go:141] control plane version: v1.35.0
	I0110 08:54:21.091850  323767 api_server.go:131] duration metric: took 1.00890096s to wait for apiserver health ...
	I0110 08:54:21.091862  323767 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 08:54:21.095859  323767 system_pods.go:59] 8 kube-system pods found
	I0110 08:54:21.095913  323767 system_pods.go:61] "coredns-7d764666f9-cjklg" [7e79f65c-0d71-4a6d-9745-cabfb1e2510a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 08:54:21.095926  323767 system_pods.go:61] "etcd-default-k8s-diff-port-225354" [58efb62e-7835-4991-bcf8-fb86873a0d32] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 08:54:21.095941  323767 system_pods.go:61] "kindnet-sd4nd" [24ae2cd1-793e-4c82-b6f7-eace35334eba] Running
	I0110 08:54:21.095950  323767 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-225354" [d8adad73-375e-48aa-ad19-7e1d6b156061] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 08:54:21.095960  323767 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-225354" [b8c56aaa-f028-436f-827f-95e333488bcd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 08:54:21.095968  323767 system_pods.go:61] "kube-proxy-fbfrd" [ca5dfc30-5416-4215-a090-edbc4a878737] Running
	I0110 08:54:21.095977  323767 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-225354" [669c7ec9-d8ee-4adf-a854-4b4424608a6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 08:54:21.095984  323767 system_pods.go:61] "storage-provisioner" [740928ba-bed5-4e17-bbba-ff0e40407f88] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0110 08:54:21.095993  323767 system_pods.go:74] duration metric: took 4.122614ms to wait for pod list to return data ...
	I0110 08:54:21.096003  323767 default_sa.go:34] waiting for default service account to be created ...
	I0110 08:54:21.098680  323767 default_sa.go:45] found service account: "default"
	I0110 08:54:21.098704  323767 default_sa.go:55] duration metric: took 2.693667ms for default service account to be created ...
	I0110 08:54:21.098714  323767 system_pods.go:116] waiting for k8s-apps to be running ...
	I0110 08:54:21.108724  323767 system_pods.go:86] 8 kube-system pods found
	I0110 08:54:21.108768  323767 system_pods.go:89] "coredns-7d764666f9-cjklg" [7e79f65c-0d71-4a6d-9745-cabfb1e2510a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0110 08:54:21.108781  323767 system_pods.go:89] "etcd-default-k8s-diff-port-225354" [58efb62e-7835-4991-bcf8-fb86873a0d32] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 08:54:21.108793  323767 system_pods.go:89] "kindnet-sd4nd" [24ae2cd1-793e-4c82-b6f7-eace35334eba] Running
	I0110 08:54:21.108830  323767 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-225354" [d8adad73-375e-48aa-ad19-7e1d6b156061] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 08:54:21.108842  323767 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-225354" [b8c56aaa-f028-436f-827f-95e333488bcd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 08:54:21.108848  323767 system_pods.go:89] "kube-proxy-fbfrd" [ca5dfc30-5416-4215-a090-edbc4a878737] Running
	I0110 08:54:21.108856  323767 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-225354" [669c7ec9-d8ee-4adf-a854-4b4424608a6b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 08:54:21.108861  323767 system_pods.go:89] "storage-provisioner" [740928ba-bed5-4e17-bbba-ff0e40407f88] Running
	I0110 08:54:21.108870  323767 system_pods.go:126] duration metric: took 10.1494ms to wait for k8s-apps to be running ...
	I0110 08:54:21.108879  323767 system_svc.go:44] waiting for kubelet service to be running ....
	I0110 08:54:21.108946  323767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:54:21.130456  323767 system_svc.go:56] duration metric: took 21.568355ms WaitForService to wait for kubelet
	I0110 08:54:21.130493  323767 kubeadm.go:587] duration metric: took 3.41423014s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0110 08:54:21.130513  323767 node_conditions.go:102] verifying NodePressure condition ...
	I0110 08:54:21.134443  323767 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 08:54:21.134466  323767 node_conditions.go:123] node cpu capacity is 8
	I0110 08:54:21.134480  323767 node_conditions.go:105] duration metric: took 3.961864ms to run NodePressure ...
	I0110 08:54:21.134508  323767 start.go:242] waiting for startup goroutines ...
	I0110 08:54:21.134522  323767 start.go:247] waiting for cluster config update ...
	I0110 08:54:21.134534  323767 start.go:256] writing updated cluster config ...
	I0110 08:54:21.134874  323767 ssh_runner.go:195] Run: rm -f paused
	I0110 08:54:21.138942  323767 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0110 08:54:21.143278  323767 pod_ready.go:83] waiting for pod "coredns-7d764666f9-cjklg" in "kube-system" namespace to be "Ready" or be gone ...
	W0110 08:54:23.148402  323767 pod_ready.go:104] pod "coredns-7d764666f9-cjklg" is not "Ready", error: <nil>
	W0110 08:54:25.154912  323767 pod_ready.go:104] pod "coredns-7d764666f9-cjklg" is not "Ready", error: <nil>
	W0110 08:54:25.766716  319849 pod_ready.go:104] pod "coredns-7d764666f9-ss4nt" is not "Ready", error: <nil>
	W0110 08:54:28.265681  319849 pod_ready.go:104] pod "coredns-7d764666f9-ss4nt" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Jan 10 08:53:51 no-preload-095312 crio[574]: time="2026-01-10T08:53:51.320100864Z" level=info msg="Started container" PID=1783 containerID=bb71638d1ffd745e414fef254e141528b61c5b817612e65764b764ecbf2972ad description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w/dashboard-metrics-scraper id=84763b24-7a38-4e2d-943b-24c5f04ce3c6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f506d85cee64f2ab636c809387ed6860fbe55e80b22ff518d240de6ccea90034
	Jan 10 08:53:52 no-preload-095312 crio[574]: time="2026-01-10T08:53:52.318773683Z" level=info msg="Removing container: 53f33e6ab8a015dc2ed1c758e5d4ea04720d91b3be976d94396499a14f4aa4e0" id=138783fa-bab7-466d-8ffc-aa1fa7d6fb27 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 08:53:52 no-preload-095312 crio[574]: time="2026-01-10T08:53:52.329358323Z" level=info msg="Removed container 53f33e6ab8a015dc2ed1c758e5d4ea04720d91b3be976d94396499a14f4aa4e0: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w/dashboard-metrics-scraper" id=138783fa-bab7-466d-8ffc-aa1fa7d6fb27 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 08:54:05 no-preload-095312 crio[574]: time="2026-01-10T08:54:05.35430583Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=07c23373-92eb-4aa2-80e6-3cd54e8c2f1b name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:05 no-preload-095312 crio[574]: time="2026-01-10T08:54:05.355478797Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fc399804-8326-4196-bff1-9d41867813bb name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:05 no-preload-095312 crio[574]: time="2026-01-10T08:54:05.356580047Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5346da91-25d7-4db8-aa32-b5036fc55378 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:05 no-preload-095312 crio[574]: time="2026-01-10T08:54:05.356720534Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:05 no-preload-095312 crio[574]: time="2026-01-10T08:54:05.363158569Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:05 no-preload-095312 crio[574]: time="2026-01-10T08:54:05.363364951Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/126d3343e28d3efee29eeb6d7828e780d71931b6cf449cd44893ddc7cf277c89/merged/etc/passwd: no such file or directory"
	Jan 10 08:54:05 no-preload-095312 crio[574]: time="2026-01-10T08:54:05.363395904Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/126d3343e28d3efee29eeb6d7828e780d71931b6cf449cd44893ddc7cf277c89/merged/etc/group: no such file or directory"
	Jan 10 08:54:05 no-preload-095312 crio[574]: time="2026-01-10T08:54:05.364088114Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:05 no-preload-095312 crio[574]: time="2026-01-10T08:54:05.408351789Z" level=info msg="Created container a3296b381169a6ddfab7afc7323a600112a626b64f20e93c01fd3bef1f408360: kube-system/storage-provisioner/storage-provisioner" id=5346da91-25d7-4db8-aa32-b5036fc55378 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:05 no-preload-095312 crio[574]: time="2026-01-10T08:54:05.409087558Z" level=info msg="Starting container: a3296b381169a6ddfab7afc7323a600112a626b64f20e93c01fd3bef1f408360" id=0c33e100-e5c4-479a-90cb-fd19a714aca2 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:54:05 no-preload-095312 crio[574]: time="2026-01-10T08:54:05.41140952Z" level=info msg="Started container" PID=1797 containerID=a3296b381169a6ddfab7afc7323a600112a626b64f20e93c01fd3bef1f408360 description=kube-system/storage-provisioner/storage-provisioner id=0c33e100-e5c4-479a-90cb-fd19a714aca2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e3ef327a1cc3409e4f43ebe44ac5cfc5dc24a8ed6018e4856a555462c65479e3
	Jan 10 08:54:18 no-preload-095312 crio[574]: time="2026-01-10T08:54:18.239244007Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f04119fc-d714-465f-9f0c-6559ca89b71c name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:18 no-preload-095312 crio[574]: time="2026-01-10T08:54:18.240271924Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2ed34086-4a7b-4968-a4f6-29850d04f78f name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:18 no-preload-095312 crio[574]: time="2026-01-10T08:54:18.241299274Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w/dashboard-metrics-scraper" id=d596375c-7269-47c0-bc1e-0360bc121ff1 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:18 no-preload-095312 crio[574]: time="2026-01-10T08:54:18.24143722Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:18 no-preload-095312 crio[574]: time="2026-01-10T08:54:18.24771949Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:18 no-preload-095312 crio[574]: time="2026-01-10T08:54:18.2484449Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:18 no-preload-095312 crio[574]: time="2026-01-10T08:54:18.279587708Z" level=info msg="Created container 0bb140e695b05dc0360d1bfe69740a554acd68ace81fd737c1eef5fb1cd3c050: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w/dashboard-metrics-scraper" id=d596375c-7269-47c0-bc1e-0360bc121ff1 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:18 no-preload-095312 crio[574]: time="2026-01-10T08:54:18.280285775Z" level=info msg="Starting container: 0bb140e695b05dc0360d1bfe69740a554acd68ace81fd737c1eef5fb1cd3c050" id=95d2b712-080d-43c0-a767-67cf1bee05a3 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:54:18 no-preload-095312 crio[574]: time="2026-01-10T08:54:18.282098422Z" level=info msg="Started container" PID=1833 containerID=0bb140e695b05dc0360d1bfe69740a554acd68ace81fd737c1eef5fb1cd3c050 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w/dashboard-metrics-scraper id=95d2b712-080d-43c0-a767-67cf1bee05a3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f506d85cee64f2ab636c809387ed6860fbe55e80b22ff518d240de6ccea90034
	Jan 10 08:54:18 no-preload-095312 crio[574]: time="2026-01-10T08:54:18.392133829Z" level=info msg="Removing container: bb71638d1ffd745e414fef254e141528b61c5b817612e65764b764ecbf2972ad" id=f8d750b5-04b4-4e09-9179-783700a0c964 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 08:54:18 no-preload-095312 crio[574]: time="2026-01-10T08:54:18.402210935Z" level=info msg="Removed container bb71638d1ffd745e414fef254e141528b61c5b817612e65764b764ecbf2972ad: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w/dashboard-metrics-scraper" id=f8d750b5-04b4-4e09-9179-783700a0c964 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0bb140e695b05       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   3                   f506d85cee64f       dashboard-metrics-scraper-867fb5f87b-tfl6w   kubernetes-dashboard
	a3296b381169a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   e3ef327a1cc34       storage-provisioner                          kube-system
	f5a7c1082094a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   feefa01e2baac       kubernetes-dashboard-b84665fb8-pjbvx         kubernetes-dashboard
	8cd0cf6025ad2       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   0459af62d695a       busybox                                      default
	3518fca86e684       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           54 seconds ago      Running             coredns                     0                   05c600782658d       coredns-7d764666f9-wpsnn                     kube-system
	6642492d74528       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   e3ef327a1cc34       storage-provisioner                          kube-system
	8cc27bb53ccf0       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           54 seconds ago      Running             kindnet-cni                 0                   bc1a5427bb6dc       kindnet-tzmwv                                kube-system
	7edc134caaddf       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           54 seconds ago      Running             kube-proxy                  0                   716fac7e1cb28       kube-proxy-vrzf6                             kube-system
	4114f852d4bfb       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           57 seconds ago      Running             kube-apiserver              0                   e1543c35c1417       kube-apiserver-no-preload-095312             kube-system
	168230ea09edf       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           57 seconds ago      Running             kube-scheduler              0                   c968b731a63ab       kube-scheduler-no-preload-095312             kube-system
	360adaeb3e778       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           57 seconds ago      Running             etcd                        0                   d4edd7936eb4e       etcd-no-preload-095312                       kube-system
	7b204b358eead       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           57 seconds ago      Running             kube-controller-manager     0                   6904609f6785f       kube-controller-manager-no-preload-095312    kube-system
	
	
	==> coredns [3518fca86e68415819fef31a883c95cdfc0166747df2ea66ad4b89d8b5add329] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:59916 - 37642 "HINFO IN 1664412536841815701.8301507622720825459. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020738611s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-095312
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-095312
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=no-preload-095312
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T08_52_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 08:52:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-095312
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 08:54:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 08:54:04 +0000   Sat, 10 Jan 2026 08:52:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 08:54:04 +0000   Sat, 10 Jan 2026 08:52:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 08:54:04 +0000   Sat, 10 Jan 2026 08:52:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 08:54:04 +0000   Sat, 10 Jan 2026 08:52:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-095312
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                b2e47543-110f-4155-be9c-62c4fc9e6c69
	  Boot ID:                    7c12f492-59b6-440b-986c-741ece916a23
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-7d764666f9-wpsnn                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-no-preload-095312                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-tzmwv                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-no-preload-095312              250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-095312     200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-vrzf6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-no-preload-095312              100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-tfl6w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-pjbvx          0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  112s  node-controller  Node no-preload-095312 event: Registered Node no-preload-095312 in Controller
	  Normal  RegisteredNode  52s   node-controller  Node no-preload-095312 event: Registered Node no-preload-095312 in Controller
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 4a 03 46 88 66 4e 08 06
	[  +6.406566] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.001429] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[ +24.002020] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7a 6f 49 7e 9d d1 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 3a ac 18 98 c9 08 06
	[Jan10 08:52] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4a 05 5c 86 cf df 08 06
	[  +0.000337] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.000602] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[  +0.739059] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	[  +1.988700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[ +20.040962] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 ac f8 cf fb 84 08 06
	[  +0.000375] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[  +0.000915] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	
	
	==> etcd [360adaeb3e778e316558a2aa06913d99ad52856234134b1d2a1f72db5b201faa] <==
	{"level":"info","ts":"2026-01-10T08:53:31.832100Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T08:53:31.832125Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T08:53:31.832195Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-10T08:53:31.832342Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T08:53:31.832455Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T08:53:31.832453Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T08:53:31.832503Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T08:53:32.716531Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T08:53:32.716580Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T08:53:32.716627Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T08:53:32.716643Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:53:32.716665Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:32.717449Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:32.717481Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:53:32.717503Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:32.717517Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:32.718551Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:53:32.718551Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-095312 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T08:53:32.718570Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:53:32.718885Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T08:53:32.718974Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T08:53:32.720331Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:53:32.721143Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:53:32.724095Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T08:53:32.724172Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 08:54:30 up 37 min,  0 user,  load average: 5.00, 4.15, 2.69
	Linux no-preload-095312 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8cc27bb53ccf039cf66d610eccb576f49bdbb73941b54ab4f3ae15da8ca459c9] <==
	I0110 08:53:34.757275       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 08:53:34.757536       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 08:53:34.757679       1 main.go:148] setting mtu 1500 for CNI 
	I0110 08:53:34.757698       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 08:53:34.757717       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T08:53:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 08:53:34.964036       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 08:53:34.964070       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 08:53:34.964111       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 08:53:35.041940       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 08:53:35.542149       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 08:53:35.542188       1 metrics.go:72] Registering metrics
	I0110 08:53:35.542287       1 controller.go:711] "Syncing nftables rules"
	I0110 08:53:45.042530       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 08:53:45.042578       1 main.go:301] handling current node
	I0110 08:53:55.042064       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 08:53:55.042096       1 main.go:301] handling current node
	I0110 08:54:05.042641       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 08:54:05.042690       1 main.go:301] handling current node
	I0110 08:54:15.042069       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 08:54:15.042102       1 main.go:301] handling current node
	I0110 08:54:25.042258       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 08:54:25.042541       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4114f852d4bfbe76e80bef4884aabfe15cc867ff51c6109af0772dba003fc92e] <==
	I0110 08:53:33.912945       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 08:53:33.912966       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 08:53:33.913314       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 08:53:33.913616       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0110 08:53:33.913658       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 08:53:33.913762       1 aggregator.go:187] initial CRD sync complete...
	I0110 08:53:33.913773       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 08:53:33.913779       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 08:53:33.913785       1 cache.go:39] Caches are synced for autoregister controller
	I0110 08:53:33.917931       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0110 08:53:33.921898       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0110 08:53:33.921914       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 08:53:33.936469       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 08:53:34.169197       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 08:53:34.199109       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 08:53:34.220607       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 08:53:34.228096       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 08:53:34.235187       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 08:53:34.278276       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.126.187"}
	I0110 08:53:34.298393       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.19.52"}
	I0110 08:53:34.815022       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 08:53:37.585431       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 08:53:37.633288       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 08:53:37.682419       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7b204b358eeadb60a0ffc4d238b32e1f0914014ff649c11231353d088fbfd63e] <==
	I0110 08:53:37.034895       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0110 08:53:37.034902       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:53:37.034908       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.034923       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-095312"
	I0110 08:53:37.034965       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0110 08:53:37.035463       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.035479       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.034366       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.035812       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.034380       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.036541       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.036637       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.037324       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.037655       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.037933       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.038077       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.038688       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.035632       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.038079       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.041047       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.046649       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:53:37.134609       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.134631       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 08:53:37.134638       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 08:53:37.146995       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [7edc134caaddf41c018c315644cbf965110ff668918e57324178f7efbba5809b] <==
	I0110 08:53:34.643882       1 server_linux.go:53] "Using iptables proxy"
	I0110 08:53:34.713068       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:53:34.813221       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:34.813263       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 08:53:34.813364       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 08:53:34.834960       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 08:53:34.835026       1 server_linux.go:136] "Using iptables Proxier"
	I0110 08:53:34.841148       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 08:53:34.841576       1 server.go:529] "Version info" version="v1.35.0"
	I0110 08:53:34.841613       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:53:34.843500       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 08:53:34.843541       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 08:53:34.843545       1 config.go:200] "Starting service config controller"
	I0110 08:53:34.843571       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 08:53:34.843610       1 config.go:309] "Starting node config controller"
	I0110 08:53:34.843618       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 08:53:34.843626       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 08:53:34.843806       1 config.go:106] "Starting endpoint slice config controller"
	I0110 08:53:34.843822       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 08:53:34.943967       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 08:53:34.943999       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 08:53:34.944010       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [168230ea09edf78f7dbfce3346ce34e1aecc8ef7d88bf3480f0d898e0a09de74] <==
	I0110 08:53:32.175169       1 serving.go:386] Generated self-signed cert in-memory
	W0110 08:53:33.835361       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 08:53:33.835523       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 08:53:33.835572       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 08:53:33.835582       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 08:53:33.879844       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 08:53:33.879877       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:53:33.882336       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 08:53:33.882373       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:53:33.882492       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 08:53:33.883008       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 08:53:33.983429       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 08:53:47 no-preload-095312 kubelet[719]: I0110 08:53:47.701432     719 scope.go:122] "RemoveContainer" containerID="53f33e6ab8a015dc2ed1c758e5d4ea04720d91b3be976d94396499a14f4aa4e0"
	Jan 10 08:53:47 no-preload-095312 kubelet[719]: E0110 08:53:47.701654     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-tfl6w_kubernetes-dashboard(94945fef-f4b5-4c57-8686-bdc53440b928)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w" podUID="94945fef-f4b5-4c57-8686-bdc53440b928"
	Jan 10 08:53:48 no-preload-095312 kubelet[719]: E0110 08:53:48.305013     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-095312" containerName="kube-scheduler"
	Jan 10 08:53:49 no-preload-095312 kubelet[719]: E0110 08:53:49.165046     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-095312" containerName="kube-controller-manager"
	Jan 10 08:53:51 no-preload-095312 kubelet[719]: E0110 08:53:51.238789     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w" containerName="dashboard-metrics-scraper"
	Jan 10 08:53:51 no-preload-095312 kubelet[719]: I0110 08:53:51.238828     719 scope.go:122] "RemoveContainer" containerID="53f33e6ab8a015dc2ed1c758e5d4ea04720d91b3be976d94396499a14f4aa4e0"
	Jan 10 08:53:52 no-preload-095312 kubelet[719]: I0110 08:53:52.317405     719 scope.go:122] "RemoveContainer" containerID="53f33e6ab8a015dc2ed1c758e5d4ea04720d91b3be976d94396499a14f4aa4e0"
	Jan 10 08:53:52 no-preload-095312 kubelet[719]: E0110 08:53:52.317715     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w" containerName="dashboard-metrics-scraper"
	Jan 10 08:53:52 no-preload-095312 kubelet[719]: I0110 08:53:52.317766     719 scope.go:122] "RemoveContainer" containerID="bb71638d1ffd745e414fef254e141528b61c5b817612e65764b764ecbf2972ad"
	Jan 10 08:53:52 no-preload-095312 kubelet[719]: E0110 08:53:52.317991     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-tfl6w_kubernetes-dashboard(94945fef-f4b5-4c57-8686-bdc53440b928)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w" podUID="94945fef-f4b5-4c57-8686-bdc53440b928"
	Jan 10 08:53:57 no-preload-095312 kubelet[719]: E0110 08:53:57.702366     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w" containerName="dashboard-metrics-scraper"
	Jan 10 08:53:57 no-preload-095312 kubelet[719]: I0110 08:53:57.702409     719 scope.go:122] "RemoveContainer" containerID="bb71638d1ffd745e414fef254e141528b61c5b817612e65764b764ecbf2972ad"
	Jan 10 08:53:57 no-preload-095312 kubelet[719]: E0110 08:53:57.702623     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-tfl6w_kubernetes-dashboard(94945fef-f4b5-4c57-8686-bdc53440b928)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w" podUID="94945fef-f4b5-4c57-8686-bdc53440b928"
	Jan 10 08:54:05 no-preload-095312 kubelet[719]: I0110 08:54:05.353822     719 scope.go:122] "RemoveContainer" containerID="6642492d74528ed7ea62a82a2d1e91979d058b03e408a23c08e5341c8aef7bcf"
	Jan 10 08:54:12 no-preload-095312 kubelet[719]: E0110 08:54:12.044894     719 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wpsnn" containerName="coredns"
	Jan 10 08:54:18 no-preload-095312 kubelet[719]: E0110 08:54:18.238620     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:18 no-preload-095312 kubelet[719]: I0110 08:54:18.238660     719 scope.go:122] "RemoveContainer" containerID="bb71638d1ffd745e414fef254e141528b61c5b817612e65764b764ecbf2972ad"
	Jan 10 08:54:18 no-preload-095312 kubelet[719]: I0110 08:54:18.390772     719 scope.go:122] "RemoveContainer" containerID="bb71638d1ffd745e414fef254e141528b61c5b817612e65764b764ecbf2972ad"
	Jan 10 08:54:18 no-preload-095312 kubelet[719]: E0110 08:54:18.390979     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:18 no-preload-095312 kubelet[719]: I0110 08:54:18.391014     719 scope.go:122] "RemoveContainer" containerID="0bb140e695b05dc0360d1bfe69740a554acd68ace81fd737c1eef5fb1cd3c050"
	Jan 10 08:54:18 no-preload-095312 kubelet[719]: E0110 08:54:18.391204     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-tfl6w_kubernetes-dashboard(94945fef-f4b5-4c57-8686-bdc53440b928)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w" podUID="94945fef-f4b5-4c57-8686-bdc53440b928"
	Jan 10 08:54:26 no-preload-095312 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 08:54:26 no-preload-095312 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 08:54:26 no-preload-095312 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 08:54:26 no-preload-095312 systemd[1]: kubelet.service: Consumed 1.812s CPU time.
	
	
	==> kubernetes-dashboard [f5a7c1082094a1bd85fd4d5d1b52374cfa4eb365f89b559cac9994c89013f73c] <==
	2026/01/10 08:53:43 Starting overwatch
	2026/01/10 08:53:43 Using namespace: kubernetes-dashboard
	2026/01/10 08:53:43 Using in-cluster config to connect to apiserver
	2026/01/10 08:53:43 Using secret token for csrf signing
	2026/01/10 08:53:43 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 08:53:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 08:53:43 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 08:53:43 Generating JWE encryption key
	2026/01/10 08:53:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 08:53:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 08:53:43 Initializing JWE encryption key from synchronized object
	2026/01/10 08:53:43 Creating in-cluster Sidecar client
	2026/01/10 08:53:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 08:53:43 Serving insecurely on HTTP port: 9090
	2026/01/10 08:54:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6642492d74528ed7ea62a82a2d1e91979d058b03e408a23c08e5341c8aef7bcf] <==
	I0110 08:53:34.619369       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 08:54:04.621938       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a3296b381169a6ddfab7afc7323a600112a626b64f20e93c01fd3bef1f408360] <==
	I0110 08:54:05.428941       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 08:54:05.439280       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 08:54:05.439337       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 08:54:05.442135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:08.897516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:13.157746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:16.756331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:19.810018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:22.832811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:22.837501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 08:54:22.837681       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 08:54:22.837847       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2d7883ff-6c30-48ff-9e3a-f260577e9c48", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-095312_21c5124b-6449-4159-a883-8db1ae51193e became leader
	I0110 08:54:22.837910       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-095312_21c5124b-6449-4159-a883-8db1ae51193e!
	W0110 08:54:22.841159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:22.844515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 08:54:22.938262       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-095312_21c5124b-6449-4159-a883-8db1ae51193e!
	W0110 08:54:24.848012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:24.853953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:26.858102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:26.922002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:28.925979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:28.931632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-095312 -n no-preload-095312
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-095312 -n no-preload-095312: exit status 2 (372.624693ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-095312 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-095312
helpers_test.go:244: (dbg) docker inspect no-preload-095312:

-- stdout --
	[
	    {
	        "Id": "b55d6d4fd1b202097df2085e713e014ab91340931a47b47e91685006d96552db",
	        "Created": "2026-01-10T08:52:10.613870109Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 314084,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T08:53:24.702221053Z",
	            "FinishedAt": "2026-01-10T08:53:23.781642135Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/b55d6d4fd1b202097df2085e713e014ab91340931a47b47e91685006d96552db/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b55d6d4fd1b202097df2085e713e014ab91340931a47b47e91685006d96552db/hostname",
	        "HostsPath": "/var/lib/docker/containers/b55d6d4fd1b202097df2085e713e014ab91340931a47b47e91685006d96552db/hosts",
	        "LogPath": "/var/lib/docker/containers/b55d6d4fd1b202097df2085e713e014ab91340931a47b47e91685006d96552db/b55d6d4fd1b202097df2085e713e014ab91340931a47b47e91685006d96552db-json.log",
	        "Name": "/no-preload-095312",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-095312:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-095312",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b55d6d4fd1b202097df2085e713e014ab91340931a47b47e91685006d96552db",
	                "LowerDir": "/var/lib/docker/overlay2/564ca6ef3c40a4c5b327dca6e24a3966439d268b426f5aa657f7c665c9e2702e-init/diff:/var/lib/docker/overlay2/97b14eb520192356c991915ce74cd6094e6ef948f398ac688b667ea18491ccde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/564ca6ef3c40a4c5b327dca6e24a3966439d268b426f5aa657f7c665c9e2702e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/564ca6ef3c40a4c5b327dca6e24a3966439d268b426f5aa657f7c665c9e2702e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/564ca6ef3c40a4c5b327dca6e24a3966439d268b426f5aa657f7c665c9e2702e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-095312",
	                "Source": "/var/lib/docker/volumes/no-preload-095312/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-095312",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-095312",
	                "name.minikube.sigs.k8s.io": "no-preload-095312",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "801ad71b8ddab750772fe0aa9298dc4995797a2611d5dc3b22dfe0bdb075a0d6",
	            "SandboxKey": "/var/run/docker/netns/801ad71b8dda",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-095312": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6f7b848a3ceb686079b5c418ee8c05b8e3ebe9353b6cbb1033bc657f18ffab5a",
	                    "EndpointID": "6dc3c8f2003ecc000806eb2171fdc465b3e7743d5483de7fc091c5914b9ccbeb",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "6e:6f:25:c7:c0:89",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-095312",
	                        "b55d6d4fd1b2"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-095312 -n no-preload-095312
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-095312 -n no-preload-095312: exit status 2 (355.397526ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-095312 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-095312 logs -n 25: (1.247310046s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-847921                                                                                                                                                                                                               │ disable-driver-mounts-847921 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p default-k8s-diff-port-225354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-093083 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-095312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ stop    │ -p old-k8s-version-093083 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ stop    │ -p no-preload-095312 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-093083 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p old-k8s-version-093083 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable dashboard -p no-preload-095312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p no-preload-095312 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                       │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable metrics-server -p embed-certs-072273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-072273           │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ stop    │ -p embed-certs-072273 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-072273           │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-072273 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-072273           │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p embed-certs-072273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-072273           │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-225354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-225354 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-225354 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p default-k8s-diff-port-225354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-225354 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ image   │ old-k8s-version-093083 image list --format=json                                                                                                                                                                                               │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p old-k8s-version-093083 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p old-k8s-version-093083                                                                                                                                                                                                                     │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ image   │ no-preload-095312 image list --format=json                                                                                                                                                                                                    │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p no-preload-095312 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-095312            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p old-k8s-version-093083                                                                                                                                                                                                                     │ old-k8s-version-093083       │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p newest-cni-582650 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-582650            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:54:29
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:54:29.113285  328774 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:54:29.113595  328774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:29.113608  328774 out.go:374] Setting ErrFile to fd 2...
	I0110 08:54:29.113615  328774 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:29.113969  328774 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:54:29.114652  328774 out.go:368] Setting JSON to false
	I0110 08:54:29.116443  328774 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2221,"bootTime":1768033048,"procs":374,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:54:29.116528  328774 start.go:143] virtualization: kvm guest
	I0110 08:54:29.119440  328774 out.go:179] * [newest-cni-582650] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 08:54:29.121042  328774 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:54:29.121141  328774 notify.go:221] Checking for updates...
	I0110 08:54:29.123762  328774 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:54:29.125238  328774 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:54:29.126553  328774 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:54:29.128494  328774 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 08:54:29.129960  328774 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:54:29.132238  328774 config.go:182] Loaded profile config "default-k8s-diff-port-225354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:29.132396  328774 config.go:182] Loaded profile config "embed-certs-072273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:29.132531  328774 config.go:182] Loaded profile config "no-preload-095312": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:29.132641  328774 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:54:29.170068  328774 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:54:29.170278  328774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:54:29.249152  328774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-10 08:54:29.236039336 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:54:29.249302  328774 docker.go:319] overlay module found
	I0110 08:54:29.252878  328774 out.go:179] * Using the docker driver based on user configuration
	I0110 08:54:29.254217  328774 start.go:309] selected driver: docker
	I0110 08:54:29.254239  328774 start.go:928] validating driver "docker" against <nil>
	I0110 08:54:29.254253  328774 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:54:29.255103  328774 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:54:29.330401  328774 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-10 08:54:29.318571952 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:54:29.330601  328774 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	W0110 08:54:29.330655  328774 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0110 08:54:29.330990  328774 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 08:54:29.333978  328774 out.go:179] * Using Docker driver with root privileges
	I0110 08:54:29.335193  328774 cni.go:84] Creating CNI manager for ""
	I0110 08:54:29.335289  328774 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:54:29.335304  328774 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 08:54:29.335391  328774 start.go:353] cluster config:
	{Name:newest-cni-582650 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-582650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:54:29.337113  328774 out.go:179] * Starting "newest-cni-582650" primary control-plane node in "newest-cni-582650" cluster
	I0110 08:54:29.338495  328774 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 08:54:29.339848  328774 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:54:29.340973  328774 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:54:29.341041  328774 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:54:29.341139  328774 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 08:54:29.341158  328774 cache.go:65] Caching tarball of preloaded images
	I0110 08:54:29.341259  328774 preload.go:251] Found /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 08:54:29.341276  328774 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 08:54:29.341399  328774 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/config.json ...
	I0110 08:54:29.341422  328774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/config.json: {Name:mkef83f36c8b219909f04c3dd59895beb38ec3cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:54:29.368687  328774 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 08:54:29.368752  328774 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 08:54:29.368774  328774 cache.go:243] Successfully downloaded all kic artifacts
	I0110 08:54:29.368810  328774 start.go:360] acquireMachinesLock for newest-cni-582650: {Name:mk8a366cb6a19cf5fbfd56cf9cfee17123f828e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:54:29.368956  328774 start.go:364] duration metric: took 118.339µs to acquireMachinesLock for "newest-cni-582650"
	I0110 08:54:29.368984  328774 start.go:93] Provisioning new machine with config: &{Name:newest-cni-582650 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-582650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 08:54:29.369038  328774 start.go:125] createHost starting for "" (driver="docker")
	W0110 08:54:27.649455  323767 pod_ready.go:104] pod "coredns-7d764666f9-cjklg" is not "Ready", error: <nil>
	W0110 08:54:29.651654  323767 pod_ready.go:104] pod "coredns-7d764666f9-cjklg" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Jan 10 08:53:51 no-preload-095312 crio[574]: time="2026-01-10T08:53:51.320100864Z" level=info msg="Started container" PID=1783 containerID=bb71638d1ffd745e414fef254e141528b61c5b817612e65764b764ecbf2972ad description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w/dashboard-metrics-scraper id=84763b24-7a38-4e2d-943b-24c5f04ce3c6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f506d85cee64f2ab636c809387ed6860fbe55e80b22ff518d240de6ccea90034
	Jan 10 08:53:52 no-preload-095312 crio[574]: time="2026-01-10T08:53:52.318773683Z" level=info msg="Removing container: 53f33e6ab8a015dc2ed1c758e5d4ea04720d91b3be976d94396499a14f4aa4e0" id=138783fa-bab7-466d-8ffc-aa1fa7d6fb27 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 08:53:52 no-preload-095312 crio[574]: time="2026-01-10T08:53:52.329358323Z" level=info msg="Removed container 53f33e6ab8a015dc2ed1c758e5d4ea04720d91b3be976d94396499a14f4aa4e0: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w/dashboard-metrics-scraper" id=138783fa-bab7-466d-8ffc-aa1fa7d6fb27 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 08:54:05 no-preload-095312 crio[574]: time="2026-01-10T08:54:05.35430583Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=07c23373-92eb-4aa2-80e6-3cd54e8c2f1b name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:05 no-preload-095312 crio[574]: time="2026-01-10T08:54:05.355478797Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=fc399804-8326-4196-bff1-9d41867813bb name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:05 no-preload-095312 crio[574]: time="2026-01-10T08:54:05.356580047Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=5346da91-25d7-4db8-aa32-b5036fc55378 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:05 no-preload-095312 crio[574]: time="2026-01-10T08:54:05.356720534Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:05 no-preload-095312 crio[574]: time="2026-01-10T08:54:05.363158569Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:05 no-preload-095312 crio[574]: time="2026-01-10T08:54:05.363364951Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/126d3343e28d3efee29eeb6d7828e780d71931b6cf449cd44893ddc7cf277c89/merged/etc/passwd: no such file or directory"
	Jan 10 08:54:05 no-preload-095312 crio[574]: time="2026-01-10T08:54:05.363395904Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/126d3343e28d3efee29eeb6d7828e780d71931b6cf449cd44893ddc7cf277c89/merged/etc/group: no such file or directory"
	Jan 10 08:54:05 no-preload-095312 crio[574]: time="2026-01-10T08:54:05.364088114Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:05 no-preload-095312 crio[574]: time="2026-01-10T08:54:05.408351789Z" level=info msg="Created container a3296b381169a6ddfab7afc7323a600112a626b64f20e93c01fd3bef1f408360: kube-system/storage-provisioner/storage-provisioner" id=5346da91-25d7-4db8-aa32-b5036fc55378 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:05 no-preload-095312 crio[574]: time="2026-01-10T08:54:05.409087558Z" level=info msg="Starting container: a3296b381169a6ddfab7afc7323a600112a626b64f20e93c01fd3bef1f408360" id=0c33e100-e5c4-479a-90cb-fd19a714aca2 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:54:05 no-preload-095312 crio[574]: time="2026-01-10T08:54:05.41140952Z" level=info msg="Started container" PID=1797 containerID=a3296b381169a6ddfab7afc7323a600112a626b64f20e93c01fd3bef1f408360 description=kube-system/storage-provisioner/storage-provisioner id=0c33e100-e5c4-479a-90cb-fd19a714aca2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e3ef327a1cc3409e4f43ebe44ac5cfc5dc24a8ed6018e4856a555462c65479e3
	Jan 10 08:54:18 no-preload-095312 crio[574]: time="2026-01-10T08:54:18.239244007Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f04119fc-d714-465f-9f0c-6559ca89b71c name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:18 no-preload-095312 crio[574]: time="2026-01-10T08:54:18.240271924Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2ed34086-4a7b-4968-a4f6-29850d04f78f name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:18 no-preload-095312 crio[574]: time="2026-01-10T08:54:18.241299274Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w/dashboard-metrics-scraper" id=d596375c-7269-47c0-bc1e-0360bc121ff1 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:18 no-preload-095312 crio[574]: time="2026-01-10T08:54:18.24143722Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:18 no-preload-095312 crio[574]: time="2026-01-10T08:54:18.24771949Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:18 no-preload-095312 crio[574]: time="2026-01-10T08:54:18.2484449Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:18 no-preload-095312 crio[574]: time="2026-01-10T08:54:18.279587708Z" level=info msg="Created container 0bb140e695b05dc0360d1bfe69740a554acd68ace81fd737c1eef5fb1cd3c050: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w/dashboard-metrics-scraper" id=d596375c-7269-47c0-bc1e-0360bc121ff1 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:18 no-preload-095312 crio[574]: time="2026-01-10T08:54:18.280285775Z" level=info msg="Starting container: 0bb140e695b05dc0360d1bfe69740a554acd68ace81fd737c1eef5fb1cd3c050" id=95d2b712-080d-43c0-a767-67cf1bee05a3 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:54:18 no-preload-095312 crio[574]: time="2026-01-10T08:54:18.282098422Z" level=info msg="Started container" PID=1833 containerID=0bb140e695b05dc0360d1bfe69740a554acd68ace81fd737c1eef5fb1cd3c050 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w/dashboard-metrics-scraper id=95d2b712-080d-43c0-a767-67cf1bee05a3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f506d85cee64f2ab636c809387ed6860fbe55e80b22ff518d240de6ccea90034
	Jan 10 08:54:18 no-preload-095312 crio[574]: time="2026-01-10T08:54:18.392133829Z" level=info msg="Removing container: bb71638d1ffd745e414fef254e141528b61c5b817612e65764b764ecbf2972ad" id=f8d750b5-04b4-4e09-9179-783700a0c964 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 08:54:18 no-preload-095312 crio[574]: time="2026-01-10T08:54:18.402210935Z" level=info msg="Removed container bb71638d1ffd745e414fef254e141528b61c5b817612e65764b764ecbf2972ad: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w/dashboard-metrics-scraper" id=f8d750b5-04b4-4e09-9179-783700a0c964 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0bb140e695b05       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           13 seconds ago       Exited              dashboard-metrics-scraper   3                   f506d85cee64f       dashboard-metrics-scraper-867fb5f87b-tfl6w   kubernetes-dashboard
	a3296b381169a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   e3ef327a1cc34       storage-provisioner                          kube-system
	f5a7c1082094a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago       Running             kubernetes-dashboard        0                   feefa01e2baac       kubernetes-dashboard-b84665fb8-pjbvx         kubernetes-dashboard
	8cd0cf6025ad2       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   0459af62d695a       busybox                                      default
	3518fca86e684       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           57 seconds ago       Running             coredns                     0                   05c600782658d       coredns-7d764666f9-wpsnn                     kube-system
	6642492d74528       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   e3ef327a1cc34       storage-provisioner                          kube-system
	8cc27bb53ccf0       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           57 seconds ago       Running             kindnet-cni                 0                   bc1a5427bb6dc       kindnet-tzmwv                                kube-system
	7edc134caaddf       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           57 seconds ago       Running             kube-proxy                  0                   716fac7e1cb28       kube-proxy-vrzf6                             kube-system
	4114f852d4bfb       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           About a minute ago   Running             kube-apiserver              0                   e1543c35c1417       kube-apiserver-no-preload-095312             kube-system
	168230ea09edf       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           About a minute ago   Running             kube-scheduler              0                   c968b731a63ab       kube-scheduler-no-preload-095312             kube-system
	360adaeb3e778       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           About a minute ago   Running             etcd                        0                   d4edd7936eb4e       etcd-no-preload-095312                       kube-system
	7b204b358eead       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           About a minute ago   Running             kube-controller-manager     0                   6904609f6785f       kube-controller-manager-no-preload-095312    kube-system
	
	
	==> coredns [3518fca86e68415819fef31a883c95cdfc0166747df2ea66ad4b89d8b5add329] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:59916 - 37642 "HINFO IN 1664412536841815701.8301507622720825459. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020738611s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-095312
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-095312
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=no-preload-095312
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T08_52_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 08:52:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-095312
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 08:54:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 08:54:04 +0000   Sat, 10 Jan 2026 08:52:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 08:54:04 +0000   Sat, 10 Jan 2026 08:52:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 08:54:04 +0000   Sat, 10 Jan 2026 08:52:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 08:54:04 +0000   Sat, 10 Jan 2026 08:52:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-095312
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                b2e47543-110f-4155-be9c-62c4fc9e6c69
	  Boot ID:                    7c12f492-59b6-440b-986c-741ece916a23
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-7d764666f9-wpsnn                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-no-preload-095312                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         119s
	  kube-system                 kindnet-tzmwv                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-no-preload-095312              250m (3%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-controller-manager-no-preload-095312     200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-vrzf6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-no-preload-095312              100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-tfl6w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-pjbvx          0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  115s  node-controller  Node no-preload-095312 event: Registered Node no-preload-095312 in Controller
	  Normal  RegisteredNode  55s   node-controller  Node no-preload-095312 event: Registered Node no-preload-095312 in Controller
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 4a 03 46 88 66 4e 08 06
	[  +6.406566] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.001429] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[ +24.002020] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7a 6f 49 7e 9d d1 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 3a ac 18 98 c9 08 06
	[Jan10 08:52] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4a 05 5c 86 cf df 08 06
	[  +0.000337] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.000602] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[  +0.739059] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	[  +1.988700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[ +20.040962] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 ac f8 cf fb 84 08 06
	[  +0.000375] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[  +0.000915] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	
	
	==> etcd [360adaeb3e778e316558a2aa06913d99ad52856234134b1d2a1f72db5b201faa] <==
	{"level":"info","ts":"2026-01-10T08:53:31.832100Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T08:53:31.832125Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T08:53:31.832195Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2026-01-10T08:53:31.832342Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T08:53:31.832455Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T08:53:31.832453Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T08:53:31.832503Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T08:53:32.716531Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T08:53:32.716580Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T08:53:32.716627Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2026-01-10T08:53:32.716643Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:53:32.716665Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:32.717449Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:32.717481Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"ea7e25599daad906 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:53:32.717503Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:32.717517Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:32.718551Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:53:32.718551Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:no-preload-095312 ClientURLs:[https://192.168.76.2:2379]}","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T08:53:32.718570Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:53:32.718885Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T08:53:32.718974Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T08:53:32.720331Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:53:32.721143Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:53:32.724095Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2026-01-10T08:53:32.724172Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 08:54:32 up 37 min,  0 user,  load average: 5.00, 4.15, 2.69
	Linux no-preload-095312 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8cc27bb53ccf039cf66d610eccb576f49bdbb73941b54ab4f3ae15da8ca459c9] <==
	I0110 08:53:34.757275       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 08:53:34.757536       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I0110 08:53:34.757679       1 main.go:148] setting mtu 1500 for CNI 
	I0110 08:53:34.757698       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 08:53:34.757717       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T08:53:34Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 08:53:34.964036       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 08:53:34.964070       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 08:53:34.964111       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 08:53:35.041940       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 08:53:35.542149       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 08:53:35.542188       1 metrics.go:72] Registering metrics
	I0110 08:53:35.542287       1 controller.go:711] "Syncing nftables rules"
	I0110 08:53:45.042530       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 08:53:45.042578       1 main.go:301] handling current node
	I0110 08:53:55.042064       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 08:53:55.042096       1 main.go:301] handling current node
	I0110 08:54:05.042641       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 08:54:05.042690       1 main.go:301] handling current node
	I0110 08:54:15.042069       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 08:54:15.042102       1 main.go:301] handling current node
	I0110 08:54:25.042258       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I0110 08:54:25.042541       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4114f852d4bfbe76e80bef4884aabfe15cc867ff51c6109af0772dba003fc92e] <==
	I0110 08:53:33.912945       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 08:53:33.912966       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 08:53:33.913314       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 08:53:33.913616       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I0110 08:53:33.913658       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 08:53:33.913762       1 aggregator.go:187] initial CRD sync complete...
	I0110 08:53:33.913773       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 08:53:33.913779       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 08:53:33.913785       1 cache.go:39] Caches are synced for autoregister controller
	I0110 08:53:33.917931       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0110 08:53:33.921898       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E0110 08:53:33.921914       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 08:53:33.936469       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 08:53:34.169197       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 08:53:34.199109       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 08:53:34.220607       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 08:53:34.228096       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 08:53:34.235187       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 08:53:34.278276       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.126.187"}
	I0110 08:53:34.298393       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.19.52"}
	I0110 08:53:34.815022       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 08:53:37.585431       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 08:53:37.633288       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 08:53:37.682419       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 08:53:37.682420       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7b204b358eeadb60a0ffc4d238b32e1f0914014ff649c11231353d088fbfd63e] <==
	I0110 08:53:37.034895       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0110 08:53:37.034902       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:53:37.034908       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.034923       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-095312"
	I0110 08:53:37.034965       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0110 08:53:37.035463       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.035479       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.034366       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.035812       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.034380       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.036541       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.036637       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.037324       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.037655       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.037933       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.038077       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.038688       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.035632       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.038079       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.041047       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.046649       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:53:37.134609       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:37.134631       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 08:53:37.134638       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 08:53:37.146995       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [7edc134caaddf41c018c315644cbf965110ff668918e57324178f7efbba5809b] <==
	I0110 08:53:34.643882       1 server_linux.go:53] "Using iptables proxy"
	I0110 08:53:34.713068       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:53:34.813221       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:34.813263       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0110 08:53:34.813364       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 08:53:34.834960       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 08:53:34.835026       1 server_linux.go:136] "Using iptables Proxier"
	I0110 08:53:34.841148       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 08:53:34.841576       1 server.go:529] "Version info" version="v1.35.0"
	I0110 08:53:34.841613       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:53:34.843500       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 08:53:34.843541       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 08:53:34.843545       1 config.go:200] "Starting service config controller"
	I0110 08:53:34.843571       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 08:53:34.843610       1 config.go:309] "Starting node config controller"
	I0110 08:53:34.843618       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 08:53:34.843626       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 08:53:34.843806       1 config.go:106] "Starting endpoint slice config controller"
	I0110 08:53:34.843822       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 08:53:34.943967       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 08:53:34.943999       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 08:53:34.944010       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [168230ea09edf78f7dbfce3346ce34e1aecc8ef7d88bf3480f0d898e0a09de74] <==
	I0110 08:53:32.175169       1 serving.go:386] Generated self-signed cert in-memory
	W0110 08:53:33.835361       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 08:53:33.835523       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 08:53:33.835572       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 08:53:33.835582       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 08:53:33.879844       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 08:53:33.879877       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:53:33.882336       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 08:53:33.882373       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:53:33.882492       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 08:53:33.883008       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 08:53:33.983429       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 08:53:47 no-preload-095312 kubelet[719]: I0110 08:53:47.701432     719 scope.go:122] "RemoveContainer" containerID="53f33e6ab8a015dc2ed1c758e5d4ea04720d91b3be976d94396499a14f4aa4e0"
	Jan 10 08:53:47 no-preload-095312 kubelet[719]: E0110 08:53:47.701654     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-tfl6w_kubernetes-dashboard(94945fef-f4b5-4c57-8686-bdc53440b928)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w" podUID="94945fef-f4b5-4c57-8686-bdc53440b928"
	Jan 10 08:53:48 no-preload-095312 kubelet[719]: E0110 08:53:48.305013     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-095312" containerName="kube-scheduler"
	Jan 10 08:53:49 no-preload-095312 kubelet[719]: E0110 08:53:49.165046     719 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-095312" containerName="kube-controller-manager"
	Jan 10 08:53:51 no-preload-095312 kubelet[719]: E0110 08:53:51.238789     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w" containerName="dashboard-metrics-scraper"
	Jan 10 08:53:51 no-preload-095312 kubelet[719]: I0110 08:53:51.238828     719 scope.go:122] "RemoveContainer" containerID="53f33e6ab8a015dc2ed1c758e5d4ea04720d91b3be976d94396499a14f4aa4e0"
	Jan 10 08:53:52 no-preload-095312 kubelet[719]: I0110 08:53:52.317405     719 scope.go:122] "RemoveContainer" containerID="53f33e6ab8a015dc2ed1c758e5d4ea04720d91b3be976d94396499a14f4aa4e0"
	Jan 10 08:53:52 no-preload-095312 kubelet[719]: E0110 08:53:52.317715     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w" containerName="dashboard-metrics-scraper"
	Jan 10 08:53:52 no-preload-095312 kubelet[719]: I0110 08:53:52.317766     719 scope.go:122] "RemoveContainer" containerID="bb71638d1ffd745e414fef254e141528b61c5b817612e65764b764ecbf2972ad"
	Jan 10 08:53:52 no-preload-095312 kubelet[719]: E0110 08:53:52.317991     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-tfl6w_kubernetes-dashboard(94945fef-f4b5-4c57-8686-bdc53440b928)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w" podUID="94945fef-f4b5-4c57-8686-bdc53440b928"
	Jan 10 08:53:57 no-preload-095312 kubelet[719]: E0110 08:53:57.702366     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w" containerName="dashboard-metrics-scraper"
	Jan 10 08:53:57 no-preload-095312 kubelet[719]: I0110 08:53:57.702409     719 scope.go:122] "RemoveContainer" containerID="bb71638d1ffd745e414fef254e141528b61c5b817612e65764b764ecbf2972ad"
	Jan 10 08:53:57 no-preload-095312 kubelet[719]: E0110 08:53:57.702623     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-tfl6w_kubernetes-dashboard(94945fef-f4b5-4c57-8686-bdc53440b928)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w" podUID="94945fef-f4b5-4c57-8686-bdc53440b928"
	Jan 10 08:54:05 no-preload-095312 kubelet[719]: I0110 08:54:05.353822     719 scope.go:122] "RemoveContainer" containerID="6642492d74528ed7ea62a82a2d1e91979d058b03e408a23c08e5341c8aef7bcf"
	Jan 10 08:54:12 no-preload-095312 kubelet[719]: E0110 08:54:12.044894     719 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-wpsnn" containerName="coredns"
	Jan 10 08:54:18 no-preload-095312 kubelet[719]: E0110 08:54:18.238620     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:18 no-preload-095312 kubelet[719]: I0110 08:54:18.238660     719 scope.go:122] "RemoveContainer" containerID="bb71638d1ffd745e414fef254e141528b61c5b817612e65764b764ecbf2972ad"
	Jan 10 08:54:18 no-preload-095312 kubelet[719]: I0110 08:54:18.390772     719 scope.go:122] "RemoveContainer" containerID="bb71638d1ffd745e414fef254e141528b61c5b817612e65764b764ecbf2972ad"
	Jan 10 08:54:18 no-preload-095312 kubelet[719]: E0110 08:54:18.390979     719 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:18 no-preload-095312 kubelet[719]: I0110 08:54:18.391014     719 scope.go:122] "RemoveContainer" containerID="0bb140e695b05dc0360d1bfe69740a554acd68ace81fd737c1eef5fb1cd3c050"
	Jan 10 08:54:18 no-preload-095312 kubelet[719]: E0110 08:54:18.391204     719 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-tfl6w_kubernetes-dashboard(94945fef-f4b5-4c57-8686-bdc53440b928)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-tfl6w" podUID="94945fef-f4b5-4c57-8686-bdc53440b928"
	Jan 10 08:54:26 no-preload-095312 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 08:54:26 no-preload-095312 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 08:54:26 no-preload-095312 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 08:54:26 no-preload-095312 systemd[1]: kubelet.service: Consumed 1.812s CPU time.
	
	
	==> kubernetes-dashboard [f5a7c1082094a1bd85fd4d5d1b52374cfa4eb365f89b559cac9994c89013f73c] <==
	2026/01/10 08:53:43 Starting overwatch
	2026/01/10 08:53:43 Using namespace: kubernetes-dashboard
	2026/01/10 08:53:43 Using in-cluster config to connect to apiserver
	2026/01/10 08:53:43 Using secret token for csrf signing
	2026/01/10 08:53:43 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 08:53:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 08:53:43 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 08:53:43 Generating JWE encryption key
	2026/01/10 08:53:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 08:53:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 08:53:43 Initializing JWE encryption key from synchronized object
	2026/01/10 08:53:43 Creating in-cluster Sidecar client
	2026/01/10 08:53:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 08:53:43 Serving insecurely on HTTP port: 9090
	2026/01/10 08:54:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6642492d74528ed7ea62a82a2d1e91979d058b03e408a23c08e5341c8aef7bcf] <==
	I0110 08:53:34.619369       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 08:54:04.621938       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a3296b381169a6ddfab7afc7323a600112a626b64f20e93c01fd3bef1f408360] <==
	I0110 08:54:05.428941       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 08:54:05.439280       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 08:54:05.439337       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 08:54:05.442135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:08.897516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:13.157746       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:16.756331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:19.810018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:22.832811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:22.837501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 08:54:22.837681       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 08:54:22.837847       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2d7883ff-6c30-48ff-9e3a-f260577e9c48", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-095312_21c5124b-6449-4159-a883-8db1ae51193e became leader
	I0110 08:54:22.837910       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-095312_21c5124b-6449-4159-a883-8db1ae51193e!
	W0110 08:54:22.841159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:22.844515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 08:54:22.938262       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-095312_21c5124b-6449-4159-a883-8db1ae51193e!
	W0110 08:54:24.848012       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:24.853953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:26.858102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:26.922002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:28.925979       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:28.931632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:30.935284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:30.940417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
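
The sectioned dump above (==> CRI-O <==, ==> kubelet <==, and so on) is minikube's post-mortem log bundle. Against a live profile the same bundle can be regenerated by hand; a minimal sketch, assuming the no-preload-095312 profile from this run still exists:

	out/minikube-linux-amd64 logs -p no-preload-095312 --file=no-preload-095312.logs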
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-095312 -n no-preload-095312
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-095312 -n no-preload-095312: exit status 2 (369.755253ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-095312 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.45s)
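
To triage this pause failure locally, the failing operation and the CRI-side state it walks can be driven by hand; a minimal sketch, assuming the docker driver and that the no-preload-095312 profile is still present:

	# Re-run the failing operation with the same verbosity the test used:
	out/minikube-linux-amd64 pause -p no-preload-095312 --alsologtostderr -v=1
	# List the kube-system containers the pause path enumerates through CRI-O
	# (crictl runs inside the kic node container):
	docker exec no-preload-095312 sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system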

TestStartStop/group/embed-certs/serial/Pause (6.36s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-072273 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-072273 --alsologtostderr -v=1: exit status 80 (2.450036051s)

-- stdout --
	* Pausing node embed-certs-072273 ... 
	
	

-- /stdout --
** stderr ** 
	I0110 08:54:43.524615  333156 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:54:43.524701  333156 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:43.524710  333156 out.go:374] Setting ErrFile to fd 2...
	I0110 08:54:43.524714  333156 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:43.525009  333156 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:54:43.525257  333156 out.go:368] Setting JSON to false
	I0110 08:54:43.525276  333156 mustload.go:66] Loading cluster: embed-certs-072273
	I0110 08:54:43.525795  333156 config.go:182] Loaded profile config "embed-certs-072273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:43.526373  333156 cli_runner.go:164] Run: docker container inspect embed-certs-072273 --format={{.State.Status}}
	I0110 08:54:43.548005  333156 host.go:66] Checking if "embed-certs-072273" exists ...
	I0110 08:54:43.548275  333156 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:54:43.611375  333156 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2026-01-10 08:54:43.600252155 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:54:43.612167  333156 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:embed-certs-072273 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 08:54:43.614032  333156 out.go:179] * Pausing node embed-certs-072273 ... 
	I0110 08:54:43.615505  333156 host.go:66] Checking if "embed-certs-072273" exists ...
	I0110 08:54:43.615844  333156 ssh_runner.go:195] Run: systemctl --version
	I0110 08:54:43.615908  333156 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-072273
	I0110 08:54:43.637669  333156 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/embed-certs-072273/id_rsa Username:docker}
	I0110 08:54:43.742132  333156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:54:43.757055  333156 pause.go:52] kubelet running: true
	I0110 08:54:43.757116  333156 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 08:54:43.917606  333156 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 08:54:43.917723  333156 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 08:54:43.987434  333156 cri.go:96] found id: "aa0ac14c84306d08705b88254a7251d7ce9bb604fb21d63d3bc416ef60ad94aa"
	I0110 08:54:43.987460  333156 cri.go:96] found id: "9632afd09c0841f14d022b7df47bb8ddfded74a6e03714556a038ed3f8c03465"
	I0110 08:54:43.987465  333156 cri.go:96] found id: "393f8485c986054373bba92fd24e2b7d56b9d48329156c2e815c9024cb5c612d"
	I0110 08:54:43.987470  333156 cri.go:96] found id: "6e898f22f40fc95e7eeb2a12ad036b9e422b83a678456d0532797f52906ab60d"
	I0110 08:54:43.987474  333156 cri.go:96] found id: "aad4af12920741da7d171740026ff015c4070b3351587fdb2871778887f3c572"
	I0110 08:54:43.987479  333156 cri.go:96] found id: "4a32fec5d204fdc43d30ee63af5aecc23eab97460b3c2aa63f91be2d5b60a396"
	I0110 08:54:43.987483  333156 cri.go:96] found id: "6fd4d569ed2cfc3edfc4a61498d445f2c777a77a9d8f13453b5ba50f4942e874"
	I0110 08:54:43.987487  333156 cri.go:96] found id: "558dea2141d207b13cc98352cdf540b108631b3833c7fa7d623fd9a61e3b7c49"
	I0110 08:54:43.987491  333156 cri.go:96] found id: "1040eb4ed6b67bd13c53d3da67a4af5ac0ef2ecbedc7b475669549f60d144fcf"
	I0110 08:54:43.987508  333156 cri.go:96] found id: "14dbb08e06af9263af7a59178ddca46dd2574ee6c2dfb71f83e5e8a82e8357a0"
	I0110 08:54:43.987516  333156 cri.go:96] found id: "fd0a8039f1273e5dcd77a9bb5b599799ac405a5ed278be2b9f5d5ec63dec3721"
	I0110 08:54:43.987520  333156 cri.go:96] found id: ""
	I0110 08:54:43.987576  333156 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:54:43.999470  333156 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:54:43Z" level=error msg="open /run/runc: no such file or directory"
	I0110 08:54:44.172882  333156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:54:44.187391  333156 pause.go:52] kubelet running: false
	I0110 08:54:44.187455  333156 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 08:54:44.330673  333156 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 08:54:44.330799  333156 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 08:54:44.398378  333156 cri.go:96] found id: "aa0ac14c84306d08705b88254a7251d7ce9bb604fb21d63d3bc416ef60ad94aa"
	I0110 08:54:44.398409  333156 cri.go:96] found id: "9632afd09c0841f14d022b7df47bb8ddfded74a6e03714556a038ed3f8c03465"
	I0110 08:54:44.398416  333156 cri.go:96] found id: "393f8485c986054373bba92fd24e2b7d56b9d48329156c2e815c9024cb5c612d"
	I0110 08:54:44.398421  333156 cri.go:96] found id: "6e898f22f40fc95e7eeb2a12ad036b9e422b83a678456d0532797f52906ab60d"
	I0110 08:54:44.398425  333156 cri.go:96] found id: "aad4af12920741da7d171740026ff015c4070b3351587fdb2871778887f3c572"
	I0110 08:54:44.398431  333156 cri.go:96] found id: "4a32fec5d204fdc43d30ee63af5aecc23eab97460b3c2aa63f91be2d5b60a396"
	I0110 08:54:44.398439  333156 cri.go:96] found id: "6fd4d569ed2cfc3edfc4a61498d445f2c777a77a9d8f13453b5ba50f4942e874"
	I0110 08:54:44.398443  333156 cri.go:96] found id: "558dea2141d207b13cc98352cdf540b108631b3833c7fa7d623fd9a61e3b7c49"
	I0110 08:54:44.398447  333156 cri.go:96] found id: "1040eb4ed6b67bd13c53d3da67a4af5ac0ef2ecbedc7b475669549f60d144fcf"
	I0110 08:54:44.398465  333156 cri.go:96] found id: "14dbb08e06af9263af7a59178ddca46dd2574ee6c2dfb71f83e5e8a82e8357a0"
	I0110 08:54:44.398471  333156 cri.go:96] found id: "fd0a8039f1273e5dcd77a9bb5b599799ac405a5ed278be2b9f5d5ec63dec3721"
	I0110 08:54:44.398474  333156 cri.go:96] found id: ""
	I0110 08:54:44.398511  333156 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:54:44.889035  333156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:54:44.902930  333156 pause.go:52] kubelet running: false
	I0110 08:54:44.903000  333156 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 08:54:45.095502  333156 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 08:54:45.095585  333156 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 08:54:45.192563  333156 cri.go:96] found id: "aa0ac14c84306d08705b88254a7251d7ce9bb604fb21d63d3bc416ef60ad94aa"
	I0110 08:54:45.192588  333156 cri.go:96] found id: "9632afd09c0841f14d022b7df47bb8ddfded74a6e03714556a038ed3f8c03465"
	I0110 08:54:45.192662  333156 cri.go:96] found id: "393f8485c986054373bba92fd24e2b7d56b9d48329156c2e815c9024cb5c612d"
	I0110 08:54:45.192669  333156 cri.go:96] found id: "6e898f22f40fc95e7eeb2a12ad036b9e422b83a678456d0532797f52906ab60d"
	I0110 08:54:45.192674  333156 cri.go:96] found id: "aad4af12920741da7d171740026ff015c4070b3351587fdb2871778887f3c572"
	I0110 08:54:45.192709  333156 cri.go:96] found id: "4a32fec5d204fdc43d30ee63af5aecc23eab97460b3c2aa63f91be2d5b60a396"
	I0110 08:54:45.192718  333156 cri.go:96] found id: "6fd4d569ed2cfc3edfc4a61498d445f2c777a77a9d8f13453b5ba50f4942e874"
	I0110 08:54:45.192722  333156 cri.go:96] found id: "558dea2141d207b13cc98352cdf540b108631b3833c7fa7d623fd9a61e3b7c49"
	I0110 08:54:45.192727  333156 cri.go:96] found id: "1040eb4ed6b67bd13c53d3da67a4af5ac0ef2ecbedc7b475669549f60d144fcf"
	I0110 08:54:45.192799  333156 cri.go:96] found id: "14dbb08e06af9263af7a59178ddca46dd2574ee6c2dfb71f83e5e8a82e8357a0"
	I0110 08:54:45.192810  333156 cri.go:96] found id: "fd0a8039f1273e5dcd77a9bb5b599799ac405a5ed278be2b9f5d5ec63dec3721"
	I0110 08:54:45.192815  333156 cri.go:96] found id: ""
	I0110 08:54:45.192872  333156 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:54:45.635787  333156 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:54:45.652672  333156 pause.go:52] kubelet running: false
	I0110 08:54:45.652764  333156 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 08:54:45.814066  333156 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 08:54:45.814156  333156 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 08:54:45.892043  333156 cri.go:96] found id: "aa0ac14c84306d08705b88254a7251d7ce9bb604fb21d63d3bc416ef60ad94aa"
	I0110 08:54:45.892065  333156 cri.go:96] found id: "9632afd09c0841f14d022b7df47bb8ddfded74a6e03714556a038ed3f8c03465"
	I0110 08:54:45.892074  333156 cri.go:96] found id: "393f8485c986054373bba92fd24e2b7d56b9d48329156c2e815c9024cb5c612d"
	I0110 08:54:45.892079  333156 cri.go:96] found id: "6e898f22f40fc95e7eeb2a12ad036b9e422b83a678456d0532797f52906ab60d"
	I0110 08:54:45.892083  333156 cri.go:96] found id: "aad4af12920741da7d171740026ff015c4070b3351587fdb2871778887f3c572"
	I0110 08:54:45.892089  333156 cri.go:96] found id: "4a32fec5d204fdc43d30ee63af5aecc23eab97460b3c2aa63f91be2d5b60a396"
	I0110 08:54:45.892093  333156 cri.go:96] found id: "6fd4d569ed2cfc3edfc4a61498d445f2c777a77a9d8f13453b5ba50f4942e874"
	I0110 08:54:45.892098  333156 cri.go:96] found id: "558dea2141d207b13cc98352cdf540b108631b3833c7fa7d623fd9a61e3b7c49"
	I0110 08:54:45.892102  333156 cri.go:96] found id: "1040eb4ed6b67bd13c53d3da67a4af5ac0ef2ecbedc7b475669549f60d144fcf"
	I0110 08:54:45.892109  333156 cri.go:96] found id: "14dbb08e06af9263af7a59178ddca46dd2574ee6c2dfb71f83e5e8a82e8357a0"
	I0110 08:54:45.892114  333156 cri.go:96] found id: "fd0a8039f1273e5dcd77a9bb5b599799ac405a5ed278be2b9f5d5ec63dec3721"
	I0110 08:54:45.892118  333156 cri.go:96] found id: ""
	I0110 08:54:45.892160  333156 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:54:45.908899  333156 out.go:203] 
	W0110 08:54:45.910163  333156 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:54:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:54:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 08:54:45.910195  333156 out.go:285] * 
	* 
	W0110 08:54:45.912216  333156 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:54:45.913480  333156 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-072273 --alsologtostderr -v=1 failed: exit status 80
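Note on the failure mode: the pause path above first lists candidate containers through crictl (the "found id:" lines), then consults runc directly with `sudo runc list -f json`; every attempt, including the retries, fails with "open /run/runc: no such file or directory", so the pause aborts even though crio itself keeps answering. A minimal manual triage sketch using only standard minikube/crictl/runc invocations; these commands are illustrative and were not part of the captured run:

	# crio still enumerates the kube-system containers (matches the log above)
	out/minikube-linux-amd64 -p embed-certs-072273 ssh "sudo crictl ps -a --quiet"
	# the runc state lookup that pause depends on; fails on this node
	out/minikube-linux-amd64 -p embed-certs-072273 ssh "sudo runc list -f json"
	# confirm whether the default runc state directory exists at all
	out/minikube-linux-amd64 -p embed-certs-072273 ssh "ls /run/runc"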
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-072273
helpers_test.go:244: (dbg) docker inspect embed-certs-072273:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "55ee49e3eee1e82dd39a34aa05ca74ac8290e62f4e2e2be1df6922e52b7bc344",
	        "Created": "2026-01-10T08:52:43.607439204Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 320052,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T08:53:48.862030427Z",
	            "FinishedAt": "2026-01-10T08:53:47.033028848Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/55ee49e3eee1e82dd39a34aa05ca74ac8290e62f4e2e2be1df6922e52b7bc344/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/55ee49e3eee1e82dd39a34aa05ca74ac8290e62f4e2e2be1df6922e52b7bc344/hostname",
	        "HostsPath": "/var/lib/docker/containers/55ee49e3eee1e82dd39a34aa05ca74ac8290e62f4e2e2be1df6922e52b7bc344/hosts",
	        "LogPath": "/var/lib/docker/containers/55ee49e3eee1e82dd39a34aa05ca74ac8290e62f4e2e2be1df6922e52b7bc344/55ee49e3eee1e82dd39a34aa05ca74ac8290e62f4e2e2be1df6922e52b7bc344-json.log",
	        "Name": "/embed-certs-072273",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-072273:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-072273",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "55ee49e3eee1e82dd39a34aa05ca74ac8290e62f4e2e2be1df6922e52b7bc344",
	                "LowerDir": "/var/lib/docker/overlay2/56524a28931c04c257d4895fd7efe2b53022251486e86a9149ff74604d9ab63e-init/diff:/var/lib/docker/overlay2/97b14eb520192356c991915ce74cd6094e6ef948f398ac688b667ea18491ccde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/56524a28931c04c257d4895fd7efe2b53022251486e86a9149ff74604d9ab63e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/56524a28931c04c257d4895fd7efe2b53022251486e86a9149ff74604d9ab63e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/56524a28931c04c257d4895fd7efe2b53022251486e86a9149ff74604d9ab63e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-072273",
	                "Source": "/var/lib/docker/volumes/embed-certs-072273/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-072273",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-072273",
	                "name.minikube.sigs.k8s.io": "embed-certs-072273",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9d835355d205e114d4187cc6c6e9c2f68d6fd9f0e4acafef2cdd0f66f57e8c10",
	            "SandboxKey": "/var/run/docker/netns/9d835355d205",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-072273": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5339a54148e7314a379bb4609318a80f780708af6dca5aa937db0b5ad6eef145",
	                    "EndpointID": "cb9c7966746d4d43e4f78a515b53971cf1b4c08ca3f1cc0dcf33c62ee0609c41",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "8e:83:f6:00:91:06",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-072273",
	                        "55ee49e3eee1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
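The SSH client opened at 08:54:43.637669 in the stderr section targets 127.0.0.1:33118, which is exactly the published 22/tcp HostPort in this inspect output, so the tunnel plumbing is intact and the pause failure is not a connectivity problem. The lookup can be reproduced by hand with the same Go template minikube logs above:

	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-072273
	# expected output here: '33118'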
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-072273 -n embed-certs-072273
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-072273 -n embed-certs-072273: exit status 2 (390.397123ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
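"Running" on stdout alongside exit status 2 is consistent rather than contradictory: `minikube status` encodes component health bitwise in its exit code (per its help text, roughly 1 = host not OK, 2 = cluster not OK, 4 = kubernetes not OK), so exit 2 means the host container is up while the cluster layer is degraded, as expected after the failed pause disabled the kubelet. An illustrative follow-up, not run here, would print the per-component breakdown:

	out/minikube-linux-amd64 status -p embed-certs-072273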
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-072273 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-072273 logs -n 25: (1.147346438s)
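The `-n 25` passed to `minikube logs` limits how far back each log source goes (it appears to be the short form of the line-count flag, an assumption based on the harness usage here), which is why each section below (Audit, Last Start, CRI-O, ...) starts mid-stream. Reproducible directly against the same profile:

	out/minikube-linux-amd64 -p embed-certs-072273 logs -n 25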
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-072273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ stop    │ -p embed-certs-072273 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-072273 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p embed-certs-072273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-225354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-225354      │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-225354 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-225354      │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-225354 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-225354      │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p default-k8s-diff-port-225354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-225354      │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ image   │ old-k8s-version-093083 image list --format=json                                                                                                                                                                                               │ old-k8s-version-093083            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p old-k8s-version-093083 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-093083            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p old-k8s-version-093083                                                                                                                                                                                                                     │ old-k8s-version-093083            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ image   │ no-preload-095312 image list --format=json                                                                                                                                                                                                    │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p no-preload-095312 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p old-k8s-version-093083                                                                                                                                                                                                                     │ old-k8s-version-093083            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p newest-cni-582650 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p no-preload-095312                                                                                                                                                                                                                          │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ delete  │ -p no-preload-095312                                                                                                                                                                                                                          │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p test-preload-dl-gcs-424382 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-424382        │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-424382                                                                                                                                                                                                                 │ test-preload-dl-gcs-424382        │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p test-preload-dl-github-434342 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-434342     │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ image   │ embed-certs-072273 image list --format=json                                                                                                                                                                                                   │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p embed-certs-072273 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p test-preload-dl-github-434342                                                                                                                                                                                                              │ test-preload-dl-github-434342     │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-077581 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-077581 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-077581                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-077581 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:54:45
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:54:45.000155  333458 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:54:45.000671  333458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:45.000704  333458 out.go:374] Setting ErrFile to fd 2...
	I0110 08:54:45.000710  333458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:45.001160  333458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:54:45.002107  333458 out.go:368] Setting JSON to false
	I0110 08:54:45.003853  333458 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2237,"bootTime":1768033048,"procs":348,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:54:45.003933  333458 start.go:143] virtualization: kvm guest
	I0110 08:54:45.007843  333458 out.go:179] * [test-preload-dl-gcs-cached-077581] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 08:54:45.009134  333458 notify.go:221] Checking for updates...
	I0110 08:54:45.012013  333458 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:54:45.013307  333458 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:54:45.014768  333458 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:54:45.016885  333458 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:54:45.018787  333458 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 08:54:45.020498  333458 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:54:45.022964  333458 config.go:182] Loaded profile config "default-k8s-diff-port-225354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:45.023105  333458 config.go:182] Loaded profile config "embed-certs-072273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:45.023234  333458 config.go:182] Loaded profile config "newest-cni-582650": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:45.023350  333458 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:54:45.056715  333458 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:54:45.056928  333458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:54:45.133422  333458 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-10 08:54:45.121523334 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:54:45.133572  333458 docker.go:319] overlay module found
	I0110 08:54:45.136267  333458 out.go:179] * Using the docker driver based on user configuration
	I0110 08:54:45.137791  333458 start.go:309] selected driver: docker
	I0110 08:54:45.137810  333458 start.go:928] validating driver "docker" against <nil>
	I0110 08:54:45.137930  333458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:54:45.210058  333458 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-10 08:54:45.199476457 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:54:45.210241  333458 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 08:54:45.210765  333458 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0110 08:54:45.210947  333458 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 08:54:45.212704  333458 out.go:179] * Using Docker driver with root privileges
	I0110 08:54:45.213902  333458 cni.go:84] Creating CNI manager for ""
	I0110 08:54:45.213969  333458 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:54:45.213984  333458 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 08:54:45.214041  333458 start.go:353] cluster config:
	{Name:test-preload-dl-gcs-cached-077581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-rc.2 ClusterName:test-preload-dl-gcs-cached-077581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:54:45.215615  333458 out.go:179] * Starting "test-preload-dl-gcs-cached-077581" primary control-plane node in "test-preload-dl-gcs-cached-077581" cluster
	I0110 08:54:45.216853  333458 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 08:54:45.218072  333458 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:54:45.219223  333458 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime crio
	I0110 08:54:45.219265  333458 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0110 08:54:45.219292  333458 cache.go:65] Caching tarball of preloaded images
	I0110 08:54:45.219319  333458 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:54:45.219396  333458 preload.go:251] Found /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 08:54:45.219413  333458 cache.go:68] Finished verifying existence of preloaded tar for v1.34.0-rc.2 on crio
	I0110 08:54:45.219609  333458 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/test-preload-dl-gcs-cached-077581/config.json ...
	I0110 08:54:45.219635  333458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/test-preload-dl-gcs-cached-077581/config.json: {Name:mkd355fe097cb192d4434fb02c2d35e19a6d11db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:54:45.219816  333458 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime crio
	I0110 08:54:45.219899  333458 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0-rc.2/bin/linux/amd64/kubectl.sha256
	I0110 08:54:45.244883  333458 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 08:54:45.244911  333458 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 to local cache
	I0110 08:54:45.244999  333458 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local cache directory
	I0110 08:54:45.245018  333458 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local cache directory, skipping pull
	I0110 08:54:45.245023  333458 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in cache, skipping pull
	I0110 08:54:45.245032  333458 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 as a tarball
	I0110 08:54:45.245047  333458 cache.go:243] Successfully downloaded all kic artifacts
	I0110 08:54:45.246779  333458 out.go:179] * Download complete!
	W0110 08:54:41.649436  323767 pod_ready.go:104] pod "coredns-7d764666f9-cjklg" is not "Ready", error: <nil>
	W0110 08:54:43.650131  323767 pod_ready.go:104] pod "coredns-7d764666f9-cjklg" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Jan 10 08:54:17 embed-certs-072273 crio[573]: time="2026-01-10T08:54:17.065427711Z" level=info msg="Started container" PID=1808 containerID=c0ccad35d826145b9aba7700905411a883fd04ab9b315fa64e870f7a145a5834 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t/dashboard-metrics-scraper id=42c1dfaf-791f-493d-bf6c-ba464962d8e6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4400936980fb5ebe473cd3ded6d216fd2316446e6a0d0c781bdb2474b12c3a15
	Jan 10 08:54:17 embed-certs-072273 crio[573]: time="2026-01-10T08:54:17.107288477Z" level=info msg="Removing container: c347cd3d0abc07d28d0f9d64612398cd097ef69430d4a97883530b2a6634e2a0" id=2dd8bb47-9ee6-4470-95a0-13fef6cf4bc3 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 08:54:17 embed-certs-072273 crio[573]: time="2026-01-10T08:54:17.117764247Z" level=info msg="Removed container c347cd3d0abc07d28d0f9d64612398cd097ef69430d4a97883530b2a6634e2a0: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t/dashboard-metrics-scraper" id=2dd8bb47-9ee6-4470-95a0-13fef6cf4bc3 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 08:54:29 embed-certs-072273 crio[573]: time="2026-01-10T08:54:29.145117024Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=75202a65-c892-4973-a660-c10fde855fe9 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:29 embed-certs-072273 crio[573]: time="2026-01-10T08:54:29.148173161Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b2847079-2114-40d4-928e-b43e434565f0 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:29 embed-certs-072273 crio[573]: time="2026-01-10T08:54:29.149989323Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=02dc3186-8c34-4cfb-b86e-5b6a9713710b name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:29 embed-certs-072273 crio[573]: time="2026-01-10T08:54:29.150166509Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:29 embed-certs-072273 crio[573]: time="2026-01-10T08:54:29.156941466Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:29 embed-certs-072273 crio[573]: time="2026-01-10T08:54:29.157408576Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6fbaa667884b5df1c3fb207a6db324bcd3d95282497afd4c7bb7332571c59572/merged/etc/passwd: no such file or directory"
	Jan 10 08:54:29 embed-certs-072273 crio[573]: time="2026-01-10T08:54:29.157591767Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6fbaa667884b5df1c3fb207a6db324bcd3d95282497afd4c7bb7332571c59572/merged/etc/group: no such file or directory"
	Jan 10 08:54:29 embed-certs-072273 crio[573]: time="2026-01-10T08:54:29.158038318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:29 embed-certs-072273 crio[573]: time="2026-01-10T08:54:29.195812743Z" level=info msg="Created container aa0ac14c84306d08705b88254a7251d7ce9bb604fb21d63d3bc416ef60ad94aa: kube-system/storage-provisioner/storage-provisioner" id=02dc3186-8c34-4cfb-b86e-5b6a9713710b name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:29 embed-certs-072273 crio[573]: time="2026-01-10T08:54:29.197279426Z" level=info msg="Starting container: aa0ac14c84306d08705b88254a7251d7ce9bb604fb21d63d3bc416ef60ad94aa" id=d6535a5c-cb67-40e3-8fde-b94a592089c4 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:54:29 embed-certs-072273 crio[573]: time="2026-01-10T08:54:29.201305171Z" level=info msg="Started container" PID=1822 containerID=aa0ac14c84306d08705b88254a7251d7ce9bb604fb21d63d3bc416ef60ad94aa description=kube-system/storage-provisioner/storage-provisioner id=d6535a5c-cb67-40e3-8fde-b94a592089c4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bb7633c10b6ab8de997707936cd35f27c3f850b8ae1b49cb31f489017a0d5a72
	Jan 10 08:54:38 embed-certs-072273 crio[573]: time="2026-01-10T08:54:38.025413588Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=73941f8c-5c35-46f8-b619-e40e2ef846ed name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:38 embed-certs-072273 crio[573]: time="2026-01-10T08:54:38.026527165Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f22d32f7-c4ec-47d4-a71d-2dcfc4c3b14d name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:38 embed-certs-072273 crio[573]: time="2026-01-10T08:54:38.027795187Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t/dashboard-metrics-scraper" id=e88d23d2-f445-4057-b22d-c6d77e8a1acc name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:38 embed-certs-072273 crio[573]: time="2026-01-10T08:54:38.027950635Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:38 embed-certs-072273 crio[573]: time="2026-01-10T08:54:38.034146332Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:38 embed-certs-072273 crio[573]: time="2026-01-10T08:54:38.034839796Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:38 embed-certs-072273 crio[573]: time="2026-01-10T08:54:38.079670863Z" level=info msg="Created container 14dbb08e06af9263af7a59178ddca46dd2574ee6c2dfb71f83e5e8a82e8357a0: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t/dashboard-metrics-scraper" id=e88d23d2-f445-4057-b22d-c6d77e8a1acc name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:38 embed-certs-072273 crio[573]: time="2026-01-10T08:54:38.080460603Z" level=info msg="Starting container: 14dbb08e06af9263af7a59178ddca46dd2574ee6c2dfb71f83e5e8a82e8357a0" id=928acfd8-8d23-4f0c-8c47-0a56fddf65f9 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:54:38 embed-certs-072273 crio[573]: time="2026-01-10T08:54:38.082762633Z" level=info msg="Started container" PID=1857 containerID=14dbb08e06af9263af7a59178ddca46dd2574ee6c2dfb71f83e5e8a82e8357a0 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t/dashboard-metrics-scraper id=928acfd8-8d23-4f0c-8c47-0a56fddf65f9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4400936980fb5ebe473cd3ded6d216fd2316446e6a0d0c781bdb2474b12c3a15
	Jan 10 08:54:38 embed-certs-072273 crio[573]: time="2026-01-10T08:54:38.175882994Z" level=info msg="Removing container: c0ccad35d826145b9aba7700905411a883fd04ab9b315fa64e870f7a145a5834" id=6fa7c1a7-bb4c-4915-9ef3-39538fec88d3 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 08:54:38 embed-certs-072273 crio[573]: time="2026-01-10T08:54:38.185908475Z" level=info msg="Removed container c0ccad35d826145b9aba7700905411a883fd04ab9b315fa64e870f7a145a5834: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t/dashboard-metrics-scraper" id=6fa7c1a7-bb4c-4915-9ef3-39538fec88d3 name=/runtime.v1.RuntimeService/RemoveContainer
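	The create/start/remove cycle above is the crash-looping dashboard-metrics-scraper container being recycled. It can be inspected on the node with crictl, e.g. using the container ID from the last Started line:
	  minikube -p embed-certs-072273 ssh
	  sudo crictl ps -a --name dashboard-metrics-scraper
	  sudo crictl logs 14dbb08e06af9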
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	14dbb08e06af9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago       Exited              dashboard-metrics-scraper   3                   4400936980fb5       dashboard-metrics-scraper-867fb5f87b-v6n5t   kubernetes-dashboard
	aa0ac14c84306       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   bb7633c10b6ab       storage-provisioner                          kube-system
	fd0a8039f1273       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   c74c190e75d86       kubernetes-dashboard-b84665fb8-8m7lj         kubernetes-dashboard
	9632afd09c084       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           48 seconds ago      Running             coredns                     0                   b0d65a481ea47       coredns-7d764666f9-ss4nt                     kube-system
	a13dd8c6e4b84       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   6696e1d8082da       busybox                                      default
	393f8485c9860       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           48 seconds ago      Running             kindnet-cni                 0                   ec2f0773376d5       kindnet-svs4f                                kube-system
	6e898f22f40fc       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           48 seconds ago      Running             kube-proxy                  0                   9204d03604d70       kube-proxy-sndfh                             kube-system
	aad4af1292074       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   bb7633c10b6ab       storage-provisioner                          kube-system
	4a32fec5d204f       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           51 seconds ago      Running             kube-scheduler              0                   77f6caba042db       kube-scheduler-embed-certs-072273            kube-system
	6fd4d569ed2cf       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           51 seconds ago      Running             etcd                        0                   f7308b4de7444       etcd-embed-certs-072273                      kube-system
	558dea2141d20       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           51 seconds ago      Running             kube-controller-manager     0                   360ff70c39d59       kube-controller-manager-embed-certs-072273   kube-system
	1040eb4ed6b67       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           51 seconds ago      Running             kube-apiserver              0                   2f6a795824934       kube-apiserver-embed-certs-072273            kube-system
	
	
	==> coredns [9632afd09c0841f14d022b7df47bb8ddfded74a6e03714556a038ed3f8c03465] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:55696 - 30033 "HINFO IN 2602990715531610035.3059569359911151653. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016309677s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
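	The repeated "plugin/ready: Plugins not ready" lines mean the ready plugin holds back its health endpoint until the kubernetes plugin has synced with the API server. Assuming the default ready plugin port (8181), the endpoint can be checked through a port-forward:
	  kubectl --context embed-certs-072273 -n kube-system port-forward pod/coredns-7d764666f9-ss4nt 8181:8181 &
	  curl -s http://127.0.0.1:8181/ready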
	
	
	==> describe nodes <==
	Name:               embed-certs-072273
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-072273
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=embed-certs-072273
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T08_52_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 08:52:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-072273
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 08:54:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 08:54:27 +0000   Sat, 10 Jan 2026 08:52:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 08:54:27 +0000   Sat, 10 Jan 2026 08:52:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 08:54:27 +0000   Sat, 10 Jan 2026 08:52:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 08:54:27 +0000   Sat, 10 Jan 2026 08:53:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-072273
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                296a745e-68fc-4733-bca6-ba83ff3ab707
	  Boot ID:                    7c12f492-59b6-440b-986c-741ece916a23
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-7d764666f9-ss4nt                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-embed-certs-072273                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-svs4f                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-embed-certs-072273             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-embed-certs-072273    200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-sndfh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-embed-certs-072273             100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-v6n5t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-8m7lj          0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  106s  node-controller  Node embed-certs-072273 event: Registered Node embed-certs-072273 in Controller
	  Normal  RegisteredNode  47s   node-controller  Node embed-certs-072273 event: Registered Node embed-certs-072273 in Controller
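	This block is kubectl describe node output; the Allocated resources percentages are requests over allocatable, e.g. 850m CPU requested / 8000m allocatable ≈ 10%. It can be regenerated with:
	  kubectl --context embed-certs-072273 describe node embed-certs-072273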
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 4a 03 46 88 66 4e 08 06
	[  +6.406566] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.001429] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[ +24.002020] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7a 6f 49 7e 9d d1 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 3a ac 18 98 c9 08 06
	[Jan10 08:52] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4a 05 5c 86 cf df 08 06
	[  +0.000337] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.000602] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[  +0.739059] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	[  +1.988700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[ +20.040962] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 ac f8 cf fb 84 08 06
	[  +0.000375] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[  +0.000915] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	
	
	==> etcd [6fd4d569ed2cfc3edfc4a61498d445f2c777a77a9d8f13453b5ba50f4942e874] <==
	{"level":"info","ts":"2026-01-10T08:53:55.604252Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2026-01-10T08:53:55.605601Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2026-01-10T08:53:55.604414Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2026-01-10T08:53:55.605756Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T08:53:55.605034Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T08:53:55.605898Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T08:53:56.294284Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T08:53:56.294361Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T08:53:56.294474Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2026-01-10T08:53:56.294498Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:53:56.294517Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:56.295403Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:56.295441Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:53:56.295463Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:56.295473Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:56.296770Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:embed-certs-072273 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T08:53:56.296794Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:53:56.296812Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:53:56.296994Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T08:53:56.297023Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T08:53:56.298032Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:53:56.298390Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:53:56.301296Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2026-01-10T08:53:56.301439Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2026-01-10T08:54:33.726048Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.752317ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873791269135125751 > lease_revoke:<id:40899ba71cb70c6a>","response":"size:28"}
	
	
	==> kernel <==
	 08:54:47 up 37 min,  0 user,  load average: 5.84, 4.39, 2.79
	Linux embed-certs-072273 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [393f8485c986054373bba92fd24e2b7d56b9d48329156c2e815c9024cb5c612d] <==
	I0110 08:53:58.563988       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 08:53:58.564284       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0110 08:53:58.564463       1 main.go:148] setting mtu 1500 for CNI 
	I0110 08:53:58.564485       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 08:53:58.564510       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T08:53:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 08:53:58.766846       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 08:53:58.766902       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 08:53:58.766916       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 08:53:58.860305       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 08:53:59.260450       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 08:53:59.260489       1 metrics.go:72] Registering metrics
	I0110 08:53:59.260612       1 controller.go:711] "Syncing nftables rules"
	I0110 08:54:08.767822       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 08:54:08.767911       1 main.go:301] handling current node
	I0110 08:54:18.767904       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 08:54:18.767962       1 main.go:301] handling current node
	I0110 08:54:28.766885       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 08:54:28.766923       1 main.go:301] handling current node
	I0110 08:54:38.767818       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 08:54:38.767853       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1040eb4ed6b67bd13c53d3da67a4af5ac0ef2ecbedc7b475669549f60d144fcf] <==
	I0110 08:53:57.241399       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 08:53:57.240886       1 aggregator.go:187] initial CRD sync complete...
	I0110 08:53:57.242150       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 08:53:57.242159       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 08:53:57.242171       1 cache.go:39] Caches are synced for autoregister controller
	I0110 08:53:57.241412       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 08:53:57.241485       1 cache.go:39] Caches are synced for LocalAvailability controller
	E0110 08:53:57.248437       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 08:53:57.249704       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 08:53:57.293728       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 08:53:57.301594       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:57.301628       1 policy_source.go:248] refreshing policies
	I0110 08:53:57.311115       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 08:53:57.496180       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 08:53:57.527425       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 08:53:57.547385       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 08:53:57.553888       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 08:53:57.560933       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 08:53:57.591647       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.100.207"}
	I0110 08:53:57.601654       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.74.154"}
	I0110 08:53:58.143468       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 08:54:00.791506       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 08:54:00.942039       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 08:54:01.042163       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [558dea2141d207b13cc98352cdf540b108631b3833c7fa7d623fd9a61e3b7c49] <==
	I0110 08:54:00.398264       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.398368       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.398414       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.398990       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.399082       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.399128       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.399158       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.399192       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.399246       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.399291       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.399472       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.399495       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.399597       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 08:54:00.399679       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-072273"
	I0110 08:54:00.399774       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.399794       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0110 08:54:00.399985       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.400068       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.400124       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.401232       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:54:00.421487       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.500304       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.500344       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 08:54:00.500354       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 08:54:00.501413       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [6e898f22f40fc95e7eeb2a12ad036b9e422b83a678456d0532797f52906ab60d] <==
	I0110 08:53:58.435586       1 server_linux.go:53] "Using iptables proxy"
	I0110 08:53:58.515455       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:53:58.615636       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:58.615689       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0110 08:53:58.615833       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 08:53:58.634346       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 08:53:58.634394       1 server_linux.go:136] "Using iptables Proxier"
	I0110 08:53:58.640064       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 08:53:58.640450       1 server.go:529] "Version info" version="v1.35.0"
	I0110 08:53:58.640476       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:53:58.641900       1 config.go:309] "Starting node config controller"
	I0110 08:53:58.641985       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 08:53:58.641919       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 08:53:58.642003       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 08:53:58.642006       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 08:53:58.641937       1 config.go:200] "Starting service config controller"
	I0110 08:53:58.642016       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 08:53:58.641949       1 config.go:106] "Starting endpoint slice config controller"
	I0110 08:53:58.642033       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 08:53:58.742813       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 08:53:58.742835       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 08:53:58.742886       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
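	The nodePortAddresses warning is advisory; kube-proxy is running normally in iptables mode with IPv4 as the primary family. Its health endpoint (default port 10256) can be probed from the node, assuming curl is present in the kicbase image:
	  minikube -p embed-certs-072273 ssh -- curl -s http://127.0.0.1:10256/healthz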
	
	
	==> kube-scheduler [4a32fec5d204fdc43d30ee63af5aecc23eab97460b3c2aa63f91be2d5b60a396] <==
	I0110 08:53:55.804810       1 serving.go:386] Generated self-signed cert in-memory
	W0110 08:53:57.172957       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 08:53:57.172995       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 08:53:57.173006       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 08:53:57.173016       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 08:53:57.222498       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 08:53:57.222538       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:53:57.225491       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 08:53:57.225521       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:53:57.225775       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 08:53:57.225851       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 08:53:57.326323       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 08:54:14 embed-certs-072273 kubelet[741]: E0110 08:54:14.094866     741 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-072273" containerName="etcd"
	Jan 10 08:54:17 embed-certs-072273 kubelet[741]: E0110 08:54:17.024832     741 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:17 embed-certs-072273 kubelet[741]: I0110 08:54:17.024876     741 scope.go:122] "RemoveContainer" containerID="c347cd3d0abc07d28d0f9d64612398cd097ef69430d4a97883530b2a6634e2a0"
	Jan 10 08:54:17 embed-certs-072273 kubelet[741]: I0110 08:54:17.105794     741 scope.go:122] "RemoveContainer" containerID="c347cd3d0abc07d28d0f9d64612398cd097ef69430d4a97883530b2a6634e2a0"
	Jan 10 08:54:17 embed-certs-072273 kubelet[741]: E0110 08:54:17.106042     741 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:17 embed-certs-072273 kubelet[741]: I0110 08:54:17.106087     741 scope.go:122] "RemoveContainer" containerID="c0ccad35d826145b9aba7700905411a883fd04ab9b315fa64e870f7a145a5834"
	Jan 10 08:54:17 embed-certs-072273 kubelet[741]: E0110 08:54:17.106306     741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-v6n5t_kubernetes-dashboard(d3b8021c-8b89-489d-8a02-b1372816dce5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t" podUID="d3b8021c-8b89-489d-8a02-b1372816dce5"
	Jan 10 08:54:19 embed-certs-072273 kubelet[741]: E0110 08:54:19.327558     741 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:19 embed-certs-072273 kubelet[741]: I0110 08:54:19.327605     741 scope.go:122] "RemoveContainer" containerID="c0ccad35d826145b9aba7700905411a883fd04ab9b315fa64e870f7a145a5834"
	Jan 10 08:54:19 embed-certs-072273 kubelet[741]: E0110 08:54:19.327828     741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-v6n5t_kubernetes-dashboard(d3b8021c-8b89-489d-8a02-b1372816dce5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t" podUID="d3b8021c-8b89-489d-8a02-b1372816dce5"
	Jan 10 08:54:29 embed-certs-072273 kubelet[741]: I0110 08:54:29.144130     741 scope.go:122] "RemoveContainer" containerID="aad4af12920741da7d171740026ff015c4070b3351587fdb2871778887f3c572"
	Jan 10 08:54:30 embed-certs-072273 kubelet[741]: E0110 08:54:30.051191     741 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ss4nt" containerName="coredns"
	Jan 10 08:54:38 embed-certs-072273 kubelet[741]: E0110 08:54:38.024618     741 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:38 embed-certs-072273 kubelet[741]: I0110 08:54:38.024671     741 scope.go:122] "RemoveContainer" containerID="c0ccad35d826145b9aba7700905411a883fd04ab9b315fa64e870f7a145a5834"
	Jan 10 08:54:38 embed-certs-072273 kubelet[741]: I0110 08:54:38.174280     741 scope.go:122] "RemoveContainer" containerID="c0ccad35d826145b9aba7700905411a883fd04ab9b315fa64e870f7a145a5834"
	Jan 10 08:54:38 embed-certs-072273 kubelet[741]: E0110 08:54:38.174551     741 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:38 embed-certs-072273 kubelet[741]: I0110 08:54:38.174594     741 scope.go:122] "RemoveContainer" containerID="14dbb08e06af9263af7a59178ddca46dd2574ee6c2dfb71f83e5e8a82e8357a0"
	Jan 10 08:54:38 embed-certs-072273 kubelet[741]: E0110 08:54:38.174829     741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-v6n5t_kubernetes-dashboard(d3b8021c-8b89-489d-8a02-b1372816dce5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t" podUID="d3b8021c-8b89-489d-8a02-b1372816dce5"
	Jan 10 08:54:39 embed-certs-072273 kubelet[741]: E0110 08:54:39.327329     741 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:39 embed-certs-072273 kubelet[741]: I0110 08:54:39.327390     741 scope.go:122] "RemoveContainer" containerID="14dbb08e06af9263af7a59178ddca46dd2574ee6c2dfb71f83e5e8a82e8357a0"
	Jan 10 08:54:39 embed-certs-072273 kubelet[741]: E0110 08:54:39.327605     741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-v6n5t_kubernetes-dashboard(d3b8021c-8b89-489d-8a02-b1372816dce5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t" podUID="d3b8021c-8b89-489d-8a02-b1372816dce5"
	Jan 10 08:54:43 embed-certs-072273 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 08:54:43 embed-certs-072273 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 08:54:43 embed-certs-072273 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 08:54:43 embed-certs-072273 systemd[1]: kubelet.service: Consumed 1.706s CPU time.
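	The back-off 20s → 40s progression is kubelet's CrashLoopBackOff delay doubling (starting at 10s, capped at 5m), and the final systemd lines record kubelet.service being stopped, consistent with the Pause step under test. The restart count and last exit state are visible with:
	  kubectl --context embed-certs-072273 -n kubernetes-dashboard get pod dashboard-metrics-scraper-867fb5f87b-v6n5t \
	    -o jsonpath='{.status.containerStatuses[0].restartCount} {.status.containerStatuses[0].lastState.terminated.reason}'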
	
	
	==> kubernetes-dashboard [fd0a8039f1273e5dcd77a9bb5b599799ac405a5ed278be2b9f5d5ec63dec3721] <==
	2026/01/10 08:54:06 Starting overwatch
	2026/01/10 08:54:06 Using namespace: kubernetes-dashboard
	2026/01/10 08:54:06 Using in-cluster config to connect to apiserver
	2026/01/10 08:54:06 Using secret token for csrf signing
	2026/01/10 08:54:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 08:54:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 08:54:06 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 08:54:06 Generating JWE encryption key
	2026/01/10 08:54:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 08:54:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 08:54:06 Initializing JWE encryption key from synchronized object
	2026/01/10 08:54:06 Creating in-cluster Sidecar client
	2026/01/10 08:54:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 08:54:06 Serving insecurely on HTTP port: 9090
	2026/01/10 08:54:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
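	The metric client health check fails because the dashboard-metrics-scraper Service has no ready endpoints; its backing pod is the one crash-looping above. This can be confirmed via the Service's EndpointSlices:
	  kubectl --context embed-certs-072273 -n kubernetes-dashboard get endpointslices \
	    -l kubernetes.io/service-name=dashboard-metrics-scraper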
	
	
	==> storage-provisioner [aa0ac14c84306d08705b88254a7251d7ce9bb604fb21d63d3bc416ef60ad94aa] <==
	I0110 08:54:29.222329       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 08:54:29.234538       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 08:54:29.234671       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 08:54:29.237374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:32.693310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:36.953796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:40.553200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:43.607573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:46.630336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:46.635256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 08:54:46.635412       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 08:54:46.635486       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"77ff52c0-7d74-49c1-b5d8-f06214a410f8", APIVersion:"v1", ResourceVersion:"641", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-072273_5515f0bc-fa38-41b2-9348-5f72557a8c83 became leader
	I0110 08:54:46.635637       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-072273_5515f0bc-fa38-41b2-9348-5f72557a8c83!
	W0110 08:54:46.637565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:46.641138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 08:54:46.736156       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-072273_5515f0bc-fa38-41b2-9348-5f72557a8c83!
	
	
	==> storage-provisioner [aad4af12920741da7d171740026ff015c4070b3351587fdb2871778887f3c572] <==
	I0110 08:53:58.403114       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 08:54:28.406104       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
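	The fatal error is an i/o timeout against 10.96.0.1:443, the in-cluster kubernetes Service VIP: this first provisioner instance started before Service routing was restored after the restart. VIP reachability can be spot-checked from any pod with a shell, e.g. the busybox pod listed earlier:
	  kubectl --context embed-certs-072273 exec busybox -- nc -zv -w 2 10.96.0.1 443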
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-072273 -n embed-certs-072273
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-072273 -n embed-certs-072273: exit status 2 (373.324751ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-072273 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-072273
helpers_test.go:244: (dbg) docker inspect embed-certs-072273:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "55ee49e3eee1e82dd39a34aa05ca74ac8290e62f4e2e2be1df6922e52b7bc344",
	        "Created": "2026-01-10T08:52:43.607439204Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 320052,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T08:53:48.862030427Z",
	            "FinishedAt": "2026-01-10T08:53:47.033028848Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/55ee49e3eee1e82dd39a34aa05ca74ac8290e62f4e2e2be1df6922e52b7bc344/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/55ee49e3eee1e82dd39a34aa05ca74ac8290e62f4e2e2be1df6922e52b7bc344/hostname",
	        "HostsPath": "/var/lib/docker/containers/55ee49e3eee1e82dd39a34aa05ca74ac8290e62f4e2e2be1df6922e52b7bc344/hosts",
	        "LogPath": "/var/lib/docker/containers/55ee49e3eee1e82dd39a34aa05ca74ac8290e62f4e2e2be1df6922e52b7bc344/55ee49e3eee1e82dd39a34aa05ca74ac8290e62f4e2e2be1df6922e52b7bc344-json.log",
	        "Name": "/embed-certs-072273",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-072273:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-072273",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "55ee49e3eee1e82dd39a34aa05ca74ac8290e62f4e2e2be1df6922e52b7bc344",
	                "LowerDir": "/var/lib/docker/overlay2/56524a28931c04c257d4895fd7efe2b53022251486e86a9149ff74604d9ab63e-init/diff:/var/lib/docker/overlay2/97b14eb520192356c991915ce74cd6094e6ef948f398ac688b667ea18491ccde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/56524a28931c04c257d4895fd7efe2b53022251486e86a9149ff74604d9ab63e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/56524a28931c04c257d4895fd7efe2b53022251486e86a9149ff74604d9ab63e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/56524a28931c04c257d4895fd7efe2b53022251486e86a9149ff74604d9ab63e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-072273",
	                "Source": "/var/lib/docker/volumes/embed-certs-072273/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-072273",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-072273",
	                "name.minikube.sigs.k8s.io": "embed-certs-072273",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9d835355d205e114d4187cc6c6e9c2f68d6fd9f0e4acafef2cdd0f66f57e8c10",
	            "SandboxKey": "/var/run/docker/netns/9d835355d205",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-072273": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5339a54148e7314a379bb4609318a80f780708af6dca5aa937db0b5ad6eef145",
	                    "EndpointID": "cb9c7966746d4d43e4f78a515b53971cf1b4c08ca3f1cc0dcf33c62ee0609c41",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "8e:83:f6:00:91:06",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-072273",
	                        "55ee49e3eee1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
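The host ports recorded under NetworkSettings.Ports above can be extracted individually with a Go template, e.g. for the API server port (8443/tcp):

  docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-072273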
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-072273 -n embed-certs-072273
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-072273 -n embed-certs-072273: exit status 2 (328.482935ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-072273 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-072273 logs -n 25: (1.141074472s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-072273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ stop    │ -p embed-certs-072273 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ addons  │ enable dashboard -p embed-certs-072273 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:53 UTC │
	│ start   │ -p embed-certs-072273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-225354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-225354      │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-225354 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-225354      │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-225354 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-225354      │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p default-k8s-diff-port-225354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-225354      │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ image   │ old-k8s-version-093083 image list --format=json                                                                                                                                                                                               │ old-k8s-version-093083            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p old-k8s-version-093083 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-093083            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p old-k8s-version-093083                                                                                                                                                                                                                     │ old-k8s-version-093083            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ image   │ no-preload-095312 image list --format=json                                                                                                                                                                                                    │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p no-preload-095312 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p old-k8s-version-093083                                                                                                                                                                                                                     │ old-k8s-version-093083            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p newest-cni-582650 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p no-preload-095312                                                                                                                                                                                                                          │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ delete  │ -p no-preload-095312                                                                                                                                                                                                                          │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p test-preload-dl-gcs-424382 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-424382        │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-424382                                                                                                                                                                                                                 │ test-preload-dl-gcs-424382        │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p test-preload-dl-github-434342 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-434342     │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ image   │ embed-certs-072273 image list --format=json                                                                                                                                                                                                   │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p embed-certs-072273 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p test-preload-dl-github-434342                                                                                                                                                                                                              │ test-preload-dl-github-434342     │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-077581 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-077581 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-077581                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-077581 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:54:45
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:54:45.000155  333458 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:54:45.000671  333458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:45.000704  333458 out.go:374] Setting ErrFile to fd 2...
	I0110 08:54:45.000710  333458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:45.001160  333458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:54:45.002107  333458 out.go:368] Setting JSON to false
	I0110 08:54:45.003853  333458 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2237,"bootTime":1768033048,"procs":348,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:54:45.003933  333458 start.go:143] virtualization: kvm guest
	I0110 08:54:45.007843  333458 out.go:179] * [test-preload-dl-gcs-cached-077581] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 08:54:45.009134  333458 notify.go:221] Checking for updates...
	I0110 08:54:45.012013  333458 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:54:45.013307  333458 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:54:45.014768  333458 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:54:45.016885  333458 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:54:45.018787  333458 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 08:54:45.020498  333458 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:54:45.022964  333458 config.go:182] Loaded profile config "default-k8s-diff-port-225354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:45.023105  333458 config.go:182] Loaded profile config "embed-certs-072273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:45.023234  333458 config.go:182] Loaded profile config "newest-cni-582650": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:45.023350  333458 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:54:45.056715  333458 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:54:45.056928  333458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:54:45.133422  333458 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-10 08:54:45.121523334 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:54:45.133572  333458 docker.go:319] overlay module found
	I0110 08:54:45.136267  333458 out.go:179] * Using the docker driver based on user configuration
	I0110 08:54:45.137791  333458 start.go:309] selected driver: docker
	I0110 08:54:45.137810  333458 start.go:928] validating driver "docker" against <nil>
	I0110 08:54:45.137930  333458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:54:45.210058  333458 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-10 08:54:45.199476457 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:54:45.210241  333458 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 08:54:45.210765  333458 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0110 08:54:45.210947  333458 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 08:54:45.212704  333458 out.go:179] * Using Docker driver with root privileges
	I0110 08:54:45.213902  333458 cni.go:84] Creating CNI manager for ""
	I0110 08:54:45.213969  333458 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:54:45.213984  333458 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 08:54:45.214041  333458 start.go:353] cluster config:
	{Name:test-preload-dl-gcs-cached-077581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-rc.2 ClusterName:test-preload-dl-gcs-cached-077581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:54:45.215615  333458 out.go:179] * Starting "test-preload-dl-gcs-cached-077581" primary control-plane node in "test-preload-dl-gcs-cached-077581" cluster
	I0110 08:54:45.216853  333458 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 08:54:45.218072  333458 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:54:45.219223  333458 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime crio
	I0110 08:54:45.219265  333458 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0110 08:54:45.219292  333458 cache.go:65] Caching tarball of preloaded images
	I0110 08:54:45.219319  333458 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:54:45.219396  333458 preload.go:251] Found /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 08:54:45.219413  333458 cache.go:68] Finished verifying existence of preloaded tar for v1.34.0-rc.2 on crio
	I0110 08:54:45.219609  333458 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/test-preload-dl-gcs-cached-077581/config.json ...
	I0110 08:54:45.219635  333458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/test-preload-dl-gcs-cached-077581/config.json: {Name:mkd355fe097cb192d4434fb02c2d35e19a6d11db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:54:45.219816  333458 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime crio
	I0110 08:54:45.219899  333458 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0-rc.2/bin/linux/amd64/kubectl.sha256
	I0110 08:54:45.244883  333458 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 08:54:45.244911  333458 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 to local cache
	I0110 08:54:45.244999  333458 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local cache directory
	I0110 08:54:45.245018  333458 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local cache directory, skipping pull
	I0110 08:54:45.245023  333458 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in cache, skipping pull
	I0110 08:54:45.245032  333458 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 as a tarball
	I0110 08:54:45.245047  333458 cache.go:243] Successfully downloaded all kic artifacts
	I0110 08:54:45.246779  333458 out.go:179] * Download complete!
	W0110 08:54:41.649436  323767 pod_ready.go:104] pod "coredns-7d764666f9-cjklg" is not "Ready", error: <nil>
	W0110 08:54:43.650131  323767 pod_ready.go:104] pod "coredns-7d764666f9-cjklg" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Jan 10 08:54:17 embed-certs-072273 crio[573]: time="2026-01-10T08:54:17.065427711Z" level=info msg="Started container" PID=1808 containerID=c0ccad35d826145b9aba7700905411a883fd04ab9b315fa64e870f7a145a5834 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t/dashboard-metrics-scraper id=42c1dfaf-791f-493d-bf6c-ba464962d8e6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4400936980fb5ebe473cd3ded6d216fd2316446e6a0d0c781bdb2474b12c3a15
	Jan 10 08:54:17 embed-certs-072273 crio[573]: time="2026-01-10T08:54:17.107288477Z" level=info msg="Removing container: c347cd3d0abc07d28d0f9d64612398cd097ef69430d4a97883530b2a6634e2a0" id=2dd8bb47-9ee6-4470-95a0-13fef6cf4bc3 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 08:54:17 embed-certs-072273 crio[573]: time="2026-01-10T08:54:17.117764247Z" level=info msg="Removed container c347cd3d0abc07d28d0f9d64612398cd097ef69430d4a97883530b2a6634e2a0: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t/dashboard-metrics-scraper" id=2dd8bb47-9ee6-4470-95a0-13fef6cf4bc3 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 08:54:29 embed-certs-072273 crio[573]: time="2026-01-10T08:54:29.145117024Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=75202a65-c892-4973-a660-c10fde855fe9 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:29 embed-certs-072273 crio[573]: time="2026-01-10T08:54:29.148173161Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=b2847079-2114-40d4-928e-b43e434565f0 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:29 embed-certs-072273 crio[573]: time="2026-01-10T08:54:29.149989323Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=02dc3186-8c34-4cfb-b86e-5b6a9713710b name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:29 embed-certs-072273 crio[573]: time="2026-01-10T08:54:29.150166509Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:29 embed-certs-072273 crio[573]: time="2026-01-10T08:54:29.156941466Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:29 embed-certs-072273 crio[573]: time="2026-01-10T08:54:29.157408576Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6fbaa667884b5df1c3fb207a6db324bcd3d95282497afd4c7bb7332571c59572/merged/etc/passwd: no such file or directory"
	Jan 10 08:54:29 embed-certs-072273 crio[573]: time="2026-01-10T08:54:29.157591767Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6fbaa667884b5df1c3fb207a6db324bcd3d95282497afd4c7bb7332571c59572/merged/etc/group: no such file or directory"
	Jan 10 08:54:29 embed-certs-072273 crio[573]: time="2026-01-10T08:54:29.158038318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:29 embed-certs-072273 crio[573]: time="2026-01-10T08:54:29.195812743Z" level=info msg="Created container aa0ac14c84306d08705b88254a7251d7ce9bb604fb21d63d3bc416ef60ad94aa: kube-system/storage-provisioner/storage-provisioner" id=02dc3186-8c34-4cfb-b86e-5b6a9713710b name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:29 embed-certs-072273 crio[573]: time="2026-01-10T08:54:29.197279426Z" level=info msg="Starting container: aa0ac14c84306d08705b88254a7251d7ce9bb604fb21d63d3bc416ef60ad94aa" id=d6535a5c-cb67-40e3-8fde-b94a592089c4 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:54:29 embed-certs-072273 crio[573]: time="2026-01-10T08:54:29.201305171Z" level=info msg="Started container" PID=1822 containerID=aa0ac14c84306d08705b88254a7251d7ce9bb604fb21d63d3bc416ef60ad94aa description=kube-system/storage-provisioner/storage-provisioner id=d6535a5c-cb67-40e3-8fde-b94a592089c4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bb7633c10b6ab8de997707936cd35f27c3f850b8ae1b49cb31f489017a0d5a72
	Jan 10 08:54:38 embed-certs-072273 crio[573]: time="2026-01-10T08:54:38.025413588Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=73941f8c-5c35-46f8-b619-e40e2ef846ed name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:38 embed-certs-072273 crio[573]: time="2026-01-10T08:54:38.026527165Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f22d32f7-c4ec-47d4-a71d-2dcfc4c3b14d name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:38 embed-certs-072273 crio[573]: time="2026-01-10T08:54:38.027795187Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t/dashboard-metrics-scraper" id=e88d23d2-f445-4057-b22d-c6d77e8a1acc name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:38 embed-certs-072273 crio[573]: time="2026-01-10T08:54:38.027950635Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:38 embed-certs-072273 crio[573]: time="2026-01-10T08:54:38.034146332Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:38 embed-certs-072273 crio[573]: time="2026-01-10T08:54:38.034839796Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:38 embed-certs-072273 crio[573]: time="2026-01-10T08:54:38.079670863Z" level=info msg="Created container 14dbb08e06af9263af7a59178ddca46dd2574ee6c2dfb71f83e5e8a82e8357a0: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t/dashboard-metrics-scraper" id=e88d23d2-f445-4057-b22d-c6d77e8a1acc name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:38 embed-certs-072273 crio[573]: time="2026-01-10T08:54:38.080460603Z" level=info msg="Starting container: 14dbb08e06af9263af7a59178ddca46dd2574ee6c2dfb71f83e5e8a82e8357a0" id=928acfd8-8d23-4f0c-8c47-0a56fddf65f9 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:54:38 embed-certs-072273 crio[573]: time="2026-01-10T08:54:38.082762633Z" level=info msg="Started container" PID=1857 containerID=14dbb08e06af9263af7a59178ddca46dd2574ee6c2dfb71f83e5e8a82e8357a0 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t/dashboard-metrics-scraper id=928acfd8-8d23-4f0c-8c47-0a56fddf65f9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4400936980fb5ebe473cd3ded6d216fd2316446e6a0d0c781bdb2474b12c3a15
	Jan 10 08:54:38 embed-certs-072273 crio[573]: time="2026-01-10T08:54:38.175882994Z" level=info msg="Removing container: c0ccad35d826145b9aba7700905411a883fd04ab9b315fa64e870f7a145a5834" id=6fa7c1a7-bb4c-4915-9ef3-39538fec88d3 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 08:54:38 embed-certs-072273 crio[573]: time="2026-01-10T08:54:38.185908475Z" level=info msg="Removed container c0ccad35d826145b9aba7700905411a883fd04ab9b315fa64e870f7a145a5834: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t/dashboard-metrics-scraper" id=6fa7c1a7-bb4c-4915-9ef3-39538fec88d3 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	14dbb08e06af9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago      Exited              dashboard-metrics-scraper   3                   4400936980fb5       dashboard-metrics-scraper-867fb5f87b-v6n5t   kubernetes-dashboard
	aa0ac14c84306       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   bb7633c10b6ab       storage-provisioner                          kube-system
	fd0a8039f1273       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   c74c190e75d86       kubernetes-dashboard-b84665fb8-8m7lj         kubernetes-dashboard
	9632afd09c084       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           50 seconds ago      Running             coredns                     0                   b0d65a481ea47       coredns-7d764666f9-ss4nt                     kube-system
	a13dd8c6e4b84       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   6696e1d8082da       busybox                                      default
	393f8485c9860       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           50 seconds ago      Running             kindnet-cni                 0                   ec2f0773376d5       kindnet-svs4f                                kube-system
	6e898f22f40fc       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           50 seconds ago      Running             kube-proxy                  0                   9204d03604d70       kube-proxy-sndfh                             kube-system
	aad4af1292074       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   bb7633c10b6ab       storage-provisioner                          kube-system
	4a32fec5d204f       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           53 seconds ago      Running             kube-scheduler              0                   77f6caba042db       kube-scheduler-embed-certs-072273            kube-system
	6fd4d569ed2cf       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           53 seconds ago      Running             etcd                        0                   f7308b4de7444       etcd-embed-certs-072273                      kube-system
	558dea2141d20       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           53 seconds ago      Running             kube-controller-manager     0                   360ff70c39d59       kube-controller-manager-embed-certs-072273   kube-system
	1040eb4ed6b67       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           53 seconds ago      Running             kube-apiserver              0                   2f6a795824934       kube-apiserver-embed-certs-072273            kube-system
	
	
	==> coredns [9632afd09c0841f14d022b7df47bb8ddfded74a6e03714556a038ed3f8c03465] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:55696 - 30033 "HINFO IN 2602990715531610035.3059569359911151653. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016309677s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               embed-certs-072273
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-072273
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=embed-certs-072273
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T08_52_58_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 08:52:55 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-072273
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 08:54:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 08:54:27 +0000   Sat, 10 Jan 2026 08:52:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 08:54:27 +0000   Sat, 10 Jan 2026 08:52:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 08:54:27 +0000   Sat, 10 Jan 2026 08:52:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 08:54:27 +0000   Sat, 10 Jan 2026 08:53:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-072273
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                296a745e-68fc-4733-bca6-ba83ff3ab707
	  Boot ID:                    7c12f492-59b6-440b-986c-741ece916a23
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-7d764666f9-ss4nt                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-embed-certs-072273                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-svs4f                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-embed-certs-072273             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-embed-certs-072273    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-sndfh                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-embed-certs-072273             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-v6n5t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-8m7lj          0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  108s  node-controller  Node embed-certs-072273 event: Registered Node embed-certs-072273 in Controller
	  Normal  RegisteredNode  49s   node-controller  Node embed-certs-072273 event: Registered Node embed-certs-072273 in Controller
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 4a 03 46 88 66 4e 08 06
	[  +6.406566] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.001429] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[ +24.002020] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7a 6f 49 7e 9d d1 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 3a ac 18 98 c9 08 06
	[Jan10 08:52] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4a 05 5c 86 cf df 08 06
	[  +0.000337] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.000602] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[  +0.739059] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	[  +1.988700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[ +20.040962] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 ac f8 cf fb 84 08 06
	[  +0.000375] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[  +0.000915] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	
	
	==> etcd [6fd4d569ed2cfc3edfc4a61498d445f2c777a77a9d8f13453b5ba50f4942e874] <==
	{"level":"info","ts":"2026-01-10T08:53:55.604252Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2026-01-10T08:53:55.605601Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2026-01-10T08:53:55.604414Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2026-01-10T08:53:55.605756Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T08:53:55.605034Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T08:53:55.605898Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T08:53:56.294284Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T08:53:56.294361Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T08:53:56.294474Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2026-01-10T08:53:56.294498Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:53:56.294517Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:56.295403Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:56.295441Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:53:56.295463Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:56.295473Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2026-01-10T08:53:56.296770Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:embed-certs-072273 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T08:53:56.296794Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:53:56.296812Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:53:56.296994Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T08:53:56.297023Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T08:53:56.298032Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:53:56.298390Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:53:56.301296Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2026-01-10T08:53:56.301439Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2026-01-10T08:54:33.726048Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.752317ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873791269135125751 > lease_revoke:<id:40899ba71cb70c6a>","response":"size:28"}
	
	
	==> kernel <==
	 08:54:49 up 37 min,  0 user,  load average: 5.84, 4.39, 2.79
	Linux embed-certs-072273 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [393f8485c986054373bba92fd24e2b7d56b9d48329156c2e815c9024cb5c612d] <==
	I0110 08:53:58.563988       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 08:53:58.564284       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0110 08:53:58.564463       1 main.go:148] setting mtu 1500 for CNI 
	I0110 08:53:58.564485       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 08:53:58.564510       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T08:53:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 08:53:58.766846       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 08:53:58.766902       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 08:53:58.766916       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 08:53:58.860305       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 08:53:59.260450       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 08:53:59.260489       1 metrics.go:72] Registering metrics
	I0110 08:53:59.260612       1 controller.go:711] "Syncing nftables rules"
	I0110 08:54:08.767822       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 08:54:08.767911       1 main.go:301] handling current node
	I0110 08:54:18.767904       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 08:54:18.767962       1 main.go:301] handling current node
	I0110 08:54:28.766885       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 08:54:28.766923       1 main.go:301] handling current node
	I0110 08:54:38.767818       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 08:54:38.767853       1 main.go:301] handling current node
	I0110 08:54:48.776015       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0110 08:54:48.776073       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1040eb4ed6b67bd13c53d3da67a4af5ac0ef2ecbedc7b475669549f60d144fcf] <==
	I0110 08:53:57.241399       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 08:53:57.240886       1 aggregator.go:187] initial CRD sync complete...
	I0110 08:53:57.242150       1 autoregister_controller.go:144] Starting autoregister controller
	I0110 08:53:57.242159       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0110 08:53:57.242171       1 cache.go:39] Caches are synced for autoregister controller
	I0110 08:53:57.241412       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 08:53:57.241485       1 cache.go:39] Caches are synced for LocalAvailability controller
	E0110 08:53:57.248437       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 08:53:57.249704       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 08:53:57.293728       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0110 08:53:57.301594       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:57.301628       1 policy_source.go:248] refreshing policies
	I0110 08:53:57.311115       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 08:53:57.496180       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 08:53:57.527425       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 08:53:57.547385       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 08:53:57.553888       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 08:53:57.560933       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 08:53:57.591647       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.100.207"}
	I0110 08:53:57.601654       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.74.154"}
	I0110 08:53:58.143468       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 08:54:00.791506       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 08:54:00.942039       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 08:54:01.042163       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [558dea2141d207b13cc98352cdf540b108631b3833c7fa7d623fd9a61e3b7c49] <==
	I0110 08:54:00.398264       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.398368       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.398414       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.398990       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.399082       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.399128       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.399158       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.399192       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.399246       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.399291       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.399472       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.399495       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.399597       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 08:54:00.399679       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="embed-certs-072273"
	I0110 08:54:00.399774       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.399794       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0110 08:54:00.399985       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.400068       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.400124       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.401232       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:54:00.421487       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.500304       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:00.500344       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 08:54:00.500354       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 08:54:00.501413       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [6e898f22f40fc95e7eeb2a12ad036b9e422b83a678456d0532797f52906ab60d] <==
	I0110 08:53:58.435586       1 server_linux.go:53] "Using iptables proxy"
	I0110 08:53:58.515455       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:53:58.615636       1 shared_informer.go:377] "Caches are synced"
	I0110 08:53:58.615689       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0110 08:53:58.615833       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 08:53:58.634346       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 08:53:58.634394       1 server_linux.go:136] "Using iptables Proxier"
	I0110 08:53:58.640064       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 08:53:58.640450       1 server.go:529] "Version info" version="v1.35.0"
	I0110 08:53:58.640476       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:53:58.641900       1 config.go:309] "Starting node config controller"
	I0110 08:53:58.641985       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 08:53:58.641919       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 08:53:58.642003       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 08:53:58.642006       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 08:53:58.641937       1 config.go:200] "Starting service config controller"
	I0110 08:53:58.642016       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 08:53:58.641949       1 config.go:106] "Starting endpoint slice config controller"
	I0110 08:53:58.642033       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 08:53:58.742813       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 08:53:58.742835       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 08:53:58.742886       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [4a32fec5d204fdc43d30ee63af5aecc23eab97460b3c2aa63f91be2d5b60a396] <==
	I0110 08:53:55.804810       1 serving.go:386] Generated self-signed cert in-memory
	W0110 08:53:57.172957       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 08:53:57.172995       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 08:53:57.173006       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 08:53:57.173016       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 08:53:57.222498       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 08:53:57.222538       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:53:57.225491       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 08:53:57.225521       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:53:57.225775       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 08:53:57.225851       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 08:53:57.326323       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 08:54:14 embed-certs-072273 kubelet[741]: E0110 08:54:14.094866     741 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-embed-certs-072273" containerName="etcd"
	Jan 10 08:54:17 embed-certs-072273 kubelet[741]: E0110 08:54:17.024832     741 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:17 embed-certs-072273 kubelet[741]: I0110 08:54:17.024876     741 scope.go:122] "RemoveContainer" containerID="c347cd3d0abc07d28d0f9d64612398cd097ef69430d4a97883530b2a6634e2a0"
	Jan 10 08:54:17 embed-certs-072273 kubelet[741]: I0110 08:54:17.105794     741 scope.go:122] "RemoveContainer" containerID="c347cd3d0abc07d28d0f9d64612398cd097ef69430d4a97883530b2a6634e2a0"
	Jan 10 08:54:17 embed-certs-072273 kubelet[741]: E0110 08:54:17.106042     741 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:17 embed-certs-072273 kubelet[741]: I0110 08:54:17.106087     741 scope.go:122] "RemoveContainer" containerID="c0ccad35d826145b9aba7700905411a883fd04ab9b315fa64e870f7a145a5834"
	Jan 10 08:54:17 embed-certs-072273 kubelet[741]: E0110 08:54:17.106306     741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-v6n5t_kubernetes-dashboard(d3b8021c-8b89-489d-8a02-b1372816dce5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t" podUID="d3b8021c-8b89-489d-8a02-b1372816dce5"
	Jan 10 08:54:19 embed-certs-072273 kubelet[741]: E0110 08:54:19.327558     741 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:19 embed-certs-072273 kubelet[741]: I0110 08:54:19.327605     741 scope.go:122] "RemoveContainer" containerID="c0ccad35d826145b9aba7700905411a883fd04ab9b315fa64e870f7a145a5834"
	Jan 10 08:54:19 embed-certs-072273 kubelet[741]: E0110 08:54:19.327828     741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-v6n5t_kubernetes-dashboard(d3b8021c-8b89-489d-8a02-b1372816dce5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t" podUID="d3b8021c-8b89-489d-8a02-b1372816dce5"
	Jan 10 08:54:29 embed-certs-072273 kubelet[741]: I0110 08:54:29.144130     741 scope.go:122] "RemoveContainer" containerID="aad4af12920741da7d171740026ff015c4070b3351587fdb2871778887f3c572"
	Jan 10 08:54:30 embed-certs-072273 kubelet[741]: E0110 08:54:30.051191     741 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-ss4nt" containerName="coredns"
	Jan 10 08:54:38 embed-certs-072273 kubelet[741]: E0110 08:54:38.024618     741 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:38 embed-certs-072273 kubelet[741]: I0110 08:54:38.024671     741 scope.go:122] "RemoveContainer" containerID="c0ccad35d826145b9aba7700905411a883fd04ab9b315fa64e870f7a145a5834"
	Jan 10 08:54:38 embed-certs-072273 kubelet[741]: I0110 08:54:38.174280     741 scope.go:122] "RemoveContainer" containerID="c0ccad35d826145b9aba7700905411a883fd04ab9b315fa64e870f7a145a5834"
	Jan 10 08:54:38 embed-certs-072273 kubelet[741]: E0110 08:54:38.174551     741 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:38 embed-certs-072273 kubelet[741]: I0110 08:54:38.174594     741 scope.go:122] "RemoveContainer" containerID="14dbb08e06af9263af7a59178ddca46dd2574ee6c2dfb71f83e5e8a82e8357a0"
	Jan 10 08:54:38 embed-certs-072273 kubelet[741]: E0110 08:54:38.174829     741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-v6n5t_kubernetes-dashboard(d3b8021c-8b89-489d-8a02-b1372816dce5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t" podUID="d3b8021c-8b89-489d-8a02-b1372816dce5"
	Jan 10 08:54:39 embed-certs-072273 kubelet[741]: E0110 08:54:39.327329     741 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:39 embed-certs-072273 kubelet[741]: I0110 08:54:39.327390     741 scope.go:122] "RemoveContainer" containerID="14dbb08e06af9263af7a59178ddca46dd2574ee6c2dfb71f83e5e8a82e8357a0"
	Jan 10 08:54:39 embed-certs-072273 kubelet[741]: E0110 08:54:39.327605     741 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-v6n5t_kubernetes-dashboard(d3b8021c-8b89-489d-8a02-b1372816dce5)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-v6n5t" podUID="d3b8021c-8b89-489d-8a02-b1372816dce5"
	Jan 10 08:54:43 embed-certs-072273 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 08:54:43 embed-certs-072273 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 08:54:43 embed-certs-072273 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 08:54:43 embed-certs-072273 systemd[1]: kubelet.service: Consumed 1.706s CPU time.
	
	
	==> kubernetes-dashboard [fd0a8039f1273e5dcd77a9bb5b599799ac405a5ed278be2b9f5d5ec63dec3721] <==
	2026/01/10 08:54:06 Starting overwatch
	2026/01/10 08:54:06 Using namespace: kubernetes-dashboard
	2026/01/10 08:54:06 Using in-cluster config to connect to apiserver
	2026/01/10 08:54:06 Using secret token for csrf signing
	2026/01/10 08:54:06 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 08:54:06 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 08:54:06 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 08:54:06 Generating JWE encryption key
	2026/01/10 08:54:06 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 08:54:06 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 08:54:06 Initializing JWE encryption key from synchronized object
	2026/01/10 08:54:06 Creating in-cluster Sidecar client
	2026/01/10 08:54:06 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 08:54:06 Serving insecurely on HTTP port: 9090
	2026/01/10 08:54:36 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [aa0ac14c84306d08705b88254a7251d7ce9bb604fb21d63d3bc416ef60ad94aa] <==
	I0110 08:54:29.222329       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 08:54:29.234538       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 08:54:29.234671       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 08:54:29.237374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:32.693310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:36.953796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:40.553200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:43.607573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:46.630336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:46.635256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 08:54:46.635412       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 08:54:46.635486       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"77ff52c0-7d74-49c1-b5d8-f06214a410f8", APIVersion:"v1", ResourceVersion:"641", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-072273_5515f0bc-fa38-41b2-9348-5f72557a8c83 became leader
	I0110 08:54:46.635637       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-072273_5515f0bc-fa38-41b2-9348-5f72557a8c83!
	W0110 08:54:46.637565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:46.641138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 08:54:46.736156       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-072273_5515f0bc-fa38-41b2-9348-5f72557a8c83!
	W0110 08:54:48.644693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:48.649968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [aad4af12920741da7d171740026ff015c4070b3351587fdb2871778887f3c572] <==
	I0110 08:53:58.403114       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 08:54:28.406104       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-072273 -n embed-certs-072273
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-072273 -n embed-certs-072273: exit status 2 (344.197624ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-072273 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.36s)
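Triage note (not part of the harness output): the kubelet log above shows dashboard-metrics-scraper-867fb5f87b-v6n5t stuck in CrashLoopBackOff, and the first storage-provisioner instance timing out against the in-cluster apiserver VIP (10.96.0.1:443). Assuming the profile were still up (it is deleted later in this run, per the audit log below), typical manual follow-ups would look like:

	kubectl --context embed-certs-072273 -n kubernetes-dashboard logs --previous dashboard-metrics-scraper-867fb5f87b-v6n5t
	out/minikube-linux-amd64 ssh -p embed-certs-072273 -- curl -sk https://10.96.0.1:443/version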

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.08s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-582650 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-582650 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (286.672236ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:54:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-582650 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
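The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state check, which shells out to "sudo runc list -f json" inside the node and fails here because /run/runc does not exist (see the stderr). As a manual repro sketch, assuming the profile is still running, the same probe and the crio-native equivalent are:

	out/minikube-linux-amd64 ssh -p newest-cni-582650 -- sudo runc list -f json
	out/minikube-linux-amd64 ssh -p newest-cni-582650 -- sudo crictl ps -a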
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-582650
helpers_test.go:244: (dbg) docker inspect newest-cni-582650:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4dbf07d4b1628a1acd5c2e257abf85d859efa39216a44d32788ff5dc3a82de51",
	        "Created": "2026-01-10T08:54:34.771145794Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 330856,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T08:54:34.809442856Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/4dbf07d4b1628a1acd5c2e257abf85d859efa39216a44d32788ff5dc3a82de51/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4dbf07d4b1628a1acd5c2e257abf85d859efa39216a44d32788ff5dc3a82de51/hostname",
	        "HostsPath": "/var/lib/docker/containers/4dbf07d4b1628a1acd5c2e257abf85d859efa39216a44d32788ff5dc3a82de51/hosts",
	        "LogPath": "/var/lib/docker/containers/4dbf07d4b1628a1acd5c2e257abf85d859efa39216a44d32788ff5dc3a82de51/4dbf07d4b1628a1acd5c2e257abf85d859efa39216a44d32788ff5dc3a82de51-json.log",
	        "Name": "/newest-cni-582650",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-582650:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-582650",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4dbf07d4b1628a1acd5c2e257abf85d859efa39216a44d32788ff5dc3a82de51",
	                "LowerDir": "/var/lib/docker/overlay2/cecf9ccbd369e95c2f1fad3e86ddbefa88377e415cb790180c787df246182877-init/diff:/var/lib/docker/overlay2/97b14eb520192356c991915ce74cd6094e6ef948f398ac688b667ea18491ccde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cecf9ccbd369e95c2f1fad3e86ddbefa88377e415cb790180c787df246182877/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cecf9ccbd369e95c2f1fad3e86ddbefa88377e415cb790180c787df246182877/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cecf9ccbd369e95c2f1fad3e86ddbefa88377e415cb790180c787df246182877/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-582650",
	                "Source": "/var/lib/docker/volumes/newest-cni-582650/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-582650",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-582650",
	                "name.minikube.sigs.k8s.io": "newest-cni-582650",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e9d9ce7fb2d3408f08ab0aeb0927ccecdec96fe758681840fa2e0af404bc1d41",
	            "SandboxKey": "/var/run/docker/netns/e9d9ce7fb2d3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-582650": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "075f874b6857901f9e3f8b443cec464881c99fdb29213454b1860411dcc7e5ce",
	                    "EndpointID": "c273a080f81836df35f164a9f116f4c685cb7686b155ec619fb78956b4b765c9",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "e6:cf:6e:82:ec:92",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-582650",
	                        "4dbf07d4b162"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-582650 -n newest-cni-582650
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-582650 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-072273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                                        │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-225354 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-225354      │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-225354 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-225354      │ jenkins │ v1.37.0 │ 10 Jan 26 08:53 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-225354 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-225354      │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p default-k8s-diff-port-225354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0                                                                      │ default-k8s-diff-port-225354      │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ image   │ old-k8s-version-093083 image list --format=json                                                                                                                                                                                               │ old-k8s-version-093083            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p old-k8s-version-093083 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-093083            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p old-k8s-version-093083                                                                                                                                                                                                                     │ old-k8s-version-093083            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ image   │ no-preload-095312 image list --format=json                                                                                                                                                                                                    │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p no-preload-095312 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p old-k8s-version-093083                                                                                                                                                                                                                     │ old-k8s-version-093083            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p newest-cni-582650 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ delete  │ -p no-preload-095312                                                                                                                                                                                                                          │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ delete  │ -p no-preload-095312                                                                                                                                                                                                                          │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p test-preload-dl-gcs-424382 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-424382        │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-424382                                                                                                                                                                                                                 │ test-preload-dl-gcs-424382        │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p test-preload-dl-github-434342 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-434342     │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ image   │ embed-certs-072273 image list --format=json                                                                                                                                                                                                   │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p embed-certs-072273 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p test-preload-dl-github-434342                                                                                                                                                                                                              │ test-preload-dl-github-434342     │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-077581 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-077581 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-077581                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-077581 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ delete  │ -p embed-certs-072273                                                                                                                                                                                                                         │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ delete  │ -p embed-certs-072273                                                                                                                                                                                                                         │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable metrics-server -p newest-cni-582650 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:54:45
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:54:45.000155  333458 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:54:45.000671  333458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:45.000704  333458 out.go:374] Setting ErrFile to fd 2...
	I0110 08:54:45.000710  333458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:54:45.001160  333458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:54:45.002107  333458 out.go:368] Setting JSON to false
	I0110 08:54:45.003853  333458 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2237,"bootTime":1768033048,"procs":348,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:54:45.003933  333458 start.go:143] virtualization: kvm guest
	I0110 08:54:45.007843  333458 out.go:179] * [test-preload-dl-gcs-cached-077581] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 08:54:45.009134  333458 notify.go:221] Checking for updates...
	I0110 08:54:45.012013  333458 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:54:45.013307  333458 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:54:45.014768  333458 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:54:45.016885  333458 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:54:45.018787  333458 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 08:54:45.020498  333458 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:54:45.022964  333458 config.go:182] Loaded profile config "default-k8s-diff-port-225354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:45.023105  333458 config.go:182] Loaded profile config "embed-certs-072273": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:45.023234  333458 config.go:182] Loaded profile config "newest-cni-582650": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:45.023350  333458 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:54:45.056715  333458 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:54:45.056928  333458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:54:45.133422  333458 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-10 08:54:45.121523334 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:54:45.133572  333458 docker.go:319] overlay module found
	I0110 08:54:45.136267  333458 out.go:179] * Using the docker driver based on user configuration
	I0110 08:54:45.137791  333458 start.go:309] selected driver: docker
	I0110 08:54:45.137810  333458 start.go:928] validating driver "docker" against <nil>
	I0110 08:54:45.137930  333458 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:54:45.210058  333458 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:75 SystemTime:2026-01-10 08:54:45.199476457 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:54:45.210241  333458 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 08:54:45.210765  333458 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0110 08:54:45.210947  333458 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 08:54:45.212704  333458 out.go:179] * Using Docker driver with root privileges
	I0110 08:54:45.213902  333458 cni.go:84] Creating CNI manager for ""
	I0110 08:54:45.213969  333458 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:54:45.213984  333458 start_flags.go:342] Found "CNI" CNI - setting NetworkPlugin=cni
	I0110 08:54:45.214041  333458 start.go:353] cluster config:
	{Name:test-preload-dl-gcs-cached-077581 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0-rc.2 ClusterName:test-preload-dl-gcs-cached-077581 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:54:45.215615  333458 out.go:179] * Starting "test-preload-dl-gcs-cached-077581" primary control-plane node in "test-preload-dl-gcs-cached-077581" cluster
	I0110 08:54:45.216853  333458 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 08:54:45.218072  333458 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:54:45.219223  333458 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime crio
	I0110 08:54:45.219265  333458 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0110 08:54:45.219292  333458 cache.go:65] Caching tarball of preloaded images
	I0110 08:54:45.219319  333458 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:54:45.219396  333458 preload.go:251] Found /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 08:54:45.219413  333458 cache.go:68] Finished verifying existence of preloaded tar for v1.34.0-rc.2 on crio
	I0110 08:54:45.219609  333458 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/test-preload-dl-gcs-cached-077581/config.json ...
	I0110 08:54:45.219635  333458 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/test-preload-dl-gcs-cached-077581/config.json: {Name:mkd355fe097cb192d4434fb02c2d35e19a6d11db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:54:45.219816  333458 preload.go:188] Checking if preload exists for k8s version v1.34.0-rc.2 and runtime crio
	I0110 08:54:45.219899  333458 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0-rc.2/bin/linux/amd64/kubectl.sha256
	I0110 08:54:45.244883  333458 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 08:54:45.244911  333458 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 to local cache
	I0110 08:54:45.244999  333458 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local cache directory
	I0110 08:54:45.245018  333458 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local cache directory, skipping pull
	I0110 08:54:45.245023  333458 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in cache, skipping pull
	I0110 08:54:45.245032  333458 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 as a tarball
	I0110 08:54:45.245047  333458 cache.go:243] Successfully downloaded all kic artifacts
	I0110 08:54:45.246779  333458 out.go:179] * Download complete!
	W0110 08:54:41.649436  323767 pod_ready.go:104] pod "coredns-7d764666f9-cjklg" is not "Ready", error: <nil>
	W0110 08:54:43.650131  323767 pod_ready.go:104] pod "coredns-7d764666f9-cjklg" is not "Ready", error: <nil>
	I0110 08:54:49.844941  328774 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0
	I0110 08:54:49.845018  328774 kubeadm.go:319] [preflight] Running pre-flight checks
	I0110 08:54:49.845123  328774 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I0110 08:54:49.845203  328774 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I0110 08:54:49.845244  328774 kubeadm.go:319] OS: Linux
	I0110 08:54:49.845310  328774 kubeadm.go:319] CGROUPS_CPU: enabled
	I0110 08:54:49.845374  328774 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I0110 08:54:49.845437  328774 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I0110 08:54:49.845507  328774 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I0110 08:54:49.845580  328774 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I0110 08:54:49.845620  328774 kubeadm.go:319] CGROUPS_PIDS: enabled
	I0110 08:54:49.845664  328774 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I0110 08:54:49.845729  328774 kubeadm.go:319] CGROUPS_IO: enabled
	I0110 08:54:49.845855  328774 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0110 08:54:49.845985  328774 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0110 08:54:49.846140  328774 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0110 08:54:49.846231  328774 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0110 08:54:49.847899  328774 out.go:252]   - Generating certificates and keys ...
	I0110 08:54:49.847972  328774 kubeadm.go:319] [certs] Using existing ca certificate authority
	I0110 08:54:49.848062  328774 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I0110 08:54:49.848172  328774 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0110 08:54:49.848257  328774 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I0110 08:54:49.848349  328774 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I0110 08:54:49.848428  328774 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I0110 08:54:49.848508  328774 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I0110 08:54:49.848687  328774 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-582650] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0110 08:54:49.848779  328774 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I0110 08:54:49.848938  328774 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-582650] and IPs [192.168.94.2 127.0.0.1 ::1]
	I0110 08:54:49.849037  328774 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0110 08:54:49.849132  328774 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I0110 08:54:49.849200  328774 kubeadm.go:319] [certs] Generating "sa" key and public key
	I0110 08:54:49.849298  328774 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0110 08:54:49.849369  328774 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0110 08:54:49.849456  328774 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0110 08:54:49.849540  328774 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0110 08:54:49.849633  328774 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0110 08:54:49.849705  328774 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0110 08:54:49.849822  328774 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0110 08:54:49.849908  328774 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0110 08:54:49.851166  328774 out.go:252]   - Booting up control plane ...
	I0110 08:54:49.851250  328774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0110 08:54:49.851342  328774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0110 08:54:49.851439  328774 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0110 08:54:49.851566  328774 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0110 08:54:49.851676  328774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0110 08:54:49.851841  328774 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0110 08:54:49.851969  328774 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0110 08:54:49.852013  328774 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I0110 08:54:49.852226  328774 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0110 08:54:49.852361  328774 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0110 08:54:49.852414  328774 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.843212ms
	I0110 08:54:49.852497  328774 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0110 08:54:49.852579  328774 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I0110 08:54:49.852710  328774 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0110 08:54:49.852822  328774 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0110 08:54:49.852899  328774 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.005763328s
	I0110 08:54:49.852979  328774 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.975720222s
	I0110 08:54:49.853067  328774 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.001677791s
	I0110 08:54:49.853181  328774 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0110 08:54:49.853346  328774 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0110 08:54:49.853448  328774 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I0110 08:54:49.853721  328774 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-582650 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0110 08:54:49.853817  328774 kubeadm.go:319] [bootstrap-token] Using token: z1tqj5.jd7xqex7hohaj9sl
	I0110 08:54:49.855355  328774 out.go:252]   - Configuring RBAC rules ...
	I0110 08:54:49.855483  328774 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0110 08:54:49.855587  328774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0110 08:54:49.855774  328774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0110 08:54:49.855955  328774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0110 08:54:49.856092  328774 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0110 08:54:49.856205  328774 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0110 08:54:49.856331  328774 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0110 08:54:49.856389  328774 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I0110 08:54:49.856449  328774 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I0110 08:54:49.856458  328774 kubeadm.go:319] 
	I0110 08:54:49.856514  328774 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I0110 08:54:49.856523  328774 kubeadm.go:319] 
	I0110 08:54:49.856626  328774 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I0110 08:54:49.856643  328774 kubeadm.go:319] 
	I0110 08:54:49.856678  328774 kubeadm.go:319]   mkdir -p $HOME/.kube
	I0110 08:54:49.856829  328774 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0110 08:54:49.856931  328774 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0110 08:54:49.856945  328774 kubeadm.go:319] 
	I0110 08:54:49.857010  328774 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I0110 08:54:49.857020  328774 kubeadm.go:319] 
	I0110 08:54:49.857067  328774 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0110 08:54:49.857073  328774 kubeadm.go:319] 
	I0110 08:54:49.857141  328774 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I0110 08:54:49.857240  328774 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0110 08:54:49.857356  328774 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0110 08:54:49.857366  328774 kubeadm.go:319] 
	I0110 08:54:49.857438  328774 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I0110 08:54:49.857544  328774 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I0110 08:54:49.857551  328774 kubeadm.go:319] 
	I0110 08:54:49.857668  328774 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token z1tqj5.jd7xqex7hohaj9sl \
	I0110 08:54:49.857821  328774 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f746eb27466bc6381c15f46a92d9a9e5cdeed2008acae9cc29658e7541168248 \
	I0110 08:54:49.857850  328774 kubeadm.go:319] 	--control-plane 
	I0110 08:54:49.857856  328774 kubeadm.go:319] 
	I0110 08:54:49.857933  328774 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I0110 08:54:49.857939  328774 kubeadm.go:319] 
	I0110 08:54:49.858007  328774 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token z1tqj5.jd7xqex7hohaj9sl \
	I0110 08:54:49.858116  328774 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:f746eb27466bc6381c15f46a92d9a9e5cdeed2008acae9cc29658e7541168248 
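	
	The kubeadm output above ends with ready-made join commands. If that output is lost, the same token and CA hash can be recreated on the control plane; a minimal sketch, assuming kubeadm can reach the cluster's admin kubeconfig and using the certificate directory from the [certs] phase above:
	
	    # Mint a fresh bootstrap token and print a complete join command.
	    sudo kubeadm token create --print-join-command
	
	    # Recompute the --discovery-token-ca-cert-hash from the cluster CA.
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	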
	I0110 08:54:49.858129  328774 cni.go:84] Creating CNI manager for ""
	I0110 08:54:49.858136  328774 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:54:49.859726  328774 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W0110 08:54:46.152909  323767 pod_ready.go:104] pod "coredns-7d764666f9-cjklg" is not "Ready", error: <nil>
	W0110 08:54:48.649298  323767 pod_ready.go:104] pod "coredns-7d764666f9-cjklg" is not "Ready", error: <nil>
	W0110 08:54:50.649341  323767 pod_ready.go:104] pod "coredns-7d764666f9-cjklg" is not "Ready", error: <nil>
	I0110 08:54:49.861111  328774 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0110 08:54:49.865521  328774 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0/kubectl ...
	I0110 08:54:49.865537  328774 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I0110 08:54:49.880432  328774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0110 08:54:50.097952  328774 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0110 08:54:50.098076  328774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:54:50.098102  328774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-582650 minikube.k8s.io/updated_at=2026_01_10T08_54_50_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee minikube.k8s.io/name=newest-cni-582650 minikube.k8s.io/primary=true
	I0110 08:54:50.111226  328774 ops.go:34] apiserver oom_adj: -16
	I0110 08:54:50.209580  328774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:54:50.710049  328774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:54:51.210025  328774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:54:51.710482  328774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:54:52.210663  328774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:54:52.710502  328774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:54:53.211396  328774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:54:53.710382  328774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:54:54.209872  328774 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0110 08:54:54.277027  328774 kubeadm.go:1114] duration metric: took 4.179017957s to wait for elevateKubeSystemPrivileges
	I0110 08:54:54.277059  328774 kubeadm.go:403] duration metric: took 13.284049348s to StartCluster
	I0110 08:54:54.277075  328774 settings.go:142] acquiring lock: {Name:mkbb32fc6441ceb31ce2923ea8999f8375298f08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:54:54.277132  328774 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:54:54.277994  328774 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/kubeconfig: {Name:mk0d29b3b0ee1fd71729aff31f901be50a2d0664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:54:54.278217  328774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0110 08:54:54.278230  328774 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 08:54:54.278302  328774 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-582650"
	I0110 08:54:54.278212  328774 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 08:54:54.278314  328774 addons.go:70] Setting default-storageclass=true in profile "newest-cni-582650"
	I0110 08:54:54.278326  328774 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-582650"
	I0110 08:54:54.278330  328774 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-582650"
	I0110 08:54:54.278382  328774 host.go:66] Checking if "newest-cni-582650" exists ...
	I0110 08:54:54.278422  328774 config.go:182] Loaded profile config "newest-cni-582650": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:54:54.278711  328774 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:54:54.278954  328774 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:54:54.280638  328774 out.go:179] * Verifying Kubernetes components...
	I0110 08:54:54.281856  328774 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:54:54.304610  328774 addons.go:239] Setting addon default-storageclass=true in "newest-cni-582650"
	I0110 08:54:54.304656  328774 host.go:66] Checking if "newest-cni-582650" exists ...
	I0110 08:54:54.305231  328774 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:54:54.305497  328774 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 08:54:54.307328  328774 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 08:54:54.307350  328774 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 08:54:54.307412  328774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:54:54.334446  328774 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 08:54:54.334471  328774 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 08:54:54.334528  328774 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:54:54.337014  328774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:54:54.362230  328774 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:54:54.378580  328774 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0110 08:54:54.425825  328774 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:54:54.454175  328774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 08:54:54.476484  328774 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 08:54:54.561811  328774 start.go:987] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I0110 08:54:54.562844  328774 api_server.go:52] waiting for apiserver process to appear ...
	I0110 08:54:54.562899  328774 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 08:54:54.785122  328774 api_server.go:72] duration metric: took 506.781556ms to wait for apiserver process to appear ...
	I0110 08:54:54.785156  328774 api_server.go:88] waiting for apiserver healthz status ...
	I0110 08:54:54.785175  328774 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0110 08:54:54.790425  328774 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0110 08:54:54.791230  328774 api_server.go:141] control plane version: v1.35.0
	I0110 08:54:54.791253  328774 api_server.go:131] duration metric: took 6.091147ms to wait for apiserver health ...
	I0110 08:54:54.791261  328774 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 08:54:54.791528  328774 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0110 08:54:54.792743  328774 addons.go:530] duration metric: took 514.497197ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0110 08:54:54.794075  328774 system_pods.go:59] 8 kube-system pods found
	I0110 08:54:54.794100  328774 system_pods.go:61] "coredns-7d764666f9-bmscc" [bc0ad55b-bbf6-4898-a38a-7a1a2d154cb3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 08:54:54.794108  328774 system_pods.go:61] "etcd-newest-cni-582650" [bb439312-4d17-46e1-9d07-4b972ad2299b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 08:54:54.794115  328774 system_pods.go:61] "kindnet-gp4sj" [c1167720-98b8-4850-a264-11964eb2675d] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 08:54:54.794125  328774 system_pods.go:61] "kube-apiserver-newest-cni-582650" [947302b1-615d-4f31-976c-039fcf37be97] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 08:54:54.794133  328774 system_pods.go:61] "kube-controller-manager-newest-cni-582650" [c2156827-ae41-4c25-958a-ea329f7adf65] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 08:54:54.794147  328774 system_pods.go:61] "kube-proxy-ldmfv" [02b5ffbb-b52f-4339-bee2-b9400a4714bd] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 08:54:54.794154  328774 system_pods.go:61] "kube-scheduler-newest-cni-582650" [8d788728-c388-42a6-9bcd-9ab2bf3468fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 08:54:54.794159  328774 system_pods.go:61] "storage-provisioner" [349ec60d-a776-479e-b9a0-892989e886eb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 08:54:54.794167  328774 system_pods.go:74] duration metric: took 2.901567ms to wait for pod list to return data ...
	I0110 08:54:54.794179  328774 default_sa.go:34] waiting for default service account to be created ...
	I0110 08:54:54.796092  328774 default_sa.go:45] found service account: "default"
	I0110 08:54:54.796111  328774 default_sa.go:55] duration metric: took 1.926612ms for default service account to be created ...
	I0110 08:54:54.796125  328774 kubeadm.go:587] duration metric: took 517.791101ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 08:54:54.796146  328774 node_conditions.go:102] verifying NodePressure condition ...
	I0110 08:54:54.798132  328774 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 08:54:54.798161  328774 node_conditions.go:123] node cpu capacity is 8
	I0110 08:54:54.798174  328774 node_conditions.go:105] duration metric: took 2.02348ms to run NodePressure ...
	I0110 08:54:54.798183  328774 start.go:242] waiting for startup goroutines ...
	I0110 08:54:55.066248  328774 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-582650" context rescaled to 1 replicas
	I0110 08:54:55.066290  328774 start.go:247] waiting for cluster config update ...
	I0110 08:54:55.066304  328774 start.go:256] writing updated cluster config ...
	I0110 08:54:55.066635  328774 ssh_runner.go:195] Run: rm -f paused
	I0110 08:54:55.131386  328774 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 08:54:55.134149  328774 out.go:179] * Done! kubectl is now configured to use "newest-cni-582650" cluster and "default" namespace by default
	W0110 08:54:53.148842  323767 pod_ready.go:104] pod "coredns-7d764666f9-cjklg" is not "Ready", error: <nil>
	W0110 08:54:55.150698  323767 pod_ready.go:104] pod "coredns-7d764666f9-cjklg" is not "Ready", error: <nil>
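	
	The interleaved pod_ready warnings above are a second profile (PID 323767) polling a CoreDNS pod until it reports Ready. The equivalent manual wait, sketched with plain kubectl and assuming the standard k8s-app=kube-dns label on CoreDNS:
	
	    # Block until CoreDNS is Ready, mirroring the poll loop in the log.
	    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s
	
	    # If the wait times out, show why the pod is stuck.
	    kubectl -n kube-system describe pod -l k8s-app=kube-dns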
	
	
	==> CRI-O <==
	Jan 10 08:54:54 newest-cni-582650 crio[778]: time="2026-01-10T08:54:54.678916123Z" level=info msg="Ran pod sandbox 0eb3df150accb303e9a2e974bc6a890f4dab81f7c1ac070c75655d100e6f85f8 with infra container: kube-system/kube-proxy-ldmfv/POD" id=3ae1a118-1385-41ed-a9df-efd72b2b0ea1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 08:54:54 newest-cni-582650 crio[778]: time="2026-01-10T08:54:54.679610077Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=47a359bd-2b58-4fba-a183-7054dc34699a name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:54 newest-cni-582650 crio[778]: time="2026-01-10T08:54:54.679708415Z" level=info msg="Image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 not found" id=47a359bd-2b58-4fba-a183-7054dc34699a name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:54 newest-cni-582650 crio[778]: time="2026-01-10T08:54:54.679808723Z" level=info msg="Neither image nor artifact docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 found" id=47a359bd-2b58-4fba-a183-7054dc34699a name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:54 newest-cni-582650 crio[778]: time="2026-01-10T08:54:54.679874991Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=5df6e142-a72b-41eb-b379-db255fd8641d name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:54 newest-cni-582650 crio[778]: time="2026-01-10T08:54:54.680729064Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=162dad46-6a61-49d0-a302-1725940494f4 name=/runtime.v1.ImageService/PullImage
	Jan 10 08:54:54 newest-cni-582650 crio[778]: time="2026-01-10T08:54:54.680956041Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=52a238aa-58a2-4726-a154-08560cb36b52 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:54 newest-cni-582650 crio[778]: time="2026-01-10T08:54:54.681107656Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	Jan 10 08:54:54 newest-cni-582650 crio[778]: time="2026-01-10T08:54:54.684906792Z" level=info msg="Creating container: kube-system/kube-proxy-ldmfv/kube-proxy" id=0c14706f-45c5-47f9-a137-0bdf49d3e5f5 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:54 newest-cni-582650 crio[778]: time="2026-01-10T08:54:54.685018834Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:54 newest-cni-582650 crio[778]: time="2026-01-10T08:54:54.688954992Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:54 newest-cni-582650 crio[778]: time="2026-01-10T08:54:54.689361696Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:54 newest-cni-582650 crio[778]: time="2026-01-10T08:54:54.721388192Z" level=info msg="Created container ae14757854f6aec94a1144e0c070d5d42ec008efeb6782c4a5b37f439b78d857: kube-system/kube-proxy-ldmfv/kube-proxy" id=0c14706f-45c5-47f9-a137-0bdf49d3e5f5 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:54 newest-cni-582650 crio[778]: time="2026-01-10T08:54:54.722134582Z" level=info msg="Starting container: ae14757854f6aec94a1144e0c070d5d42ec008efeb6782c4a5b37f439b78d857" id=1a7f6417-66d9-44dc-9724-d46c0b2db77d name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:54:54 newest-cni-582650 crio[778]: time="2026-01-10T08:54:54.725512859Z" level=info msg="Started container" PID=1592 containerID=ae14757854f6aec94a1144e0c070d5d42ec008efeb6782c4a5b37f439b78d857 description=kube-system/kube-proxy-ldmfv/kube-proxy id=1a7f6417-66d9-44dc-9724-d46c0b2db77d name=/runtime.v1.RuntimeService/StartContainer sandboxID=0eb3df150accb303e9a2e974bc6a890f4dab81f7c1ac070c75655d100e6f85f8
	Jan 10 08:54:55 newest-cni-582650 crio[778]: time="2026-01-10T08:54:55.899933173Z" level=info msg="Pulled image: docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27" id=162dad46-6a61-49d0-a302-1725940494f4 name=/runtime.v1.ImageService/PullImage
	Jan 10 08:54:55 newest-cni-582650 crio[778]: time="2026-01-10T08:54:55.900770459Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=29990829-3e98-49c1-9053-94b21d6f2330 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:55 newest-cni-582650 crio[778]: time="2026-01-10T08:54:55.903120175Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=dae2edc4-dad1-4e2d-97fc-e849dd879a0c name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:55 newest-cni-582650 crio[778]: time="2026-01-10T08:54:55.90709136Z" level=info msg="Creating container: kube-system/kindnet-gp4sj/kindnet-cni" id=7da753f7-6d09-42a0-ab1c-0e3383dbc488 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:55 newest-cni-582650 crio[778]: time="2026-01-10T08:54:55.90717962Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:55 newest-cni-582650 crio[778]: time="2026-01-10T08:54:55.911182766Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:55 newest-cni-582650 crio[778]: time="2026-01-10T08:54:55.911650433Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:55 newest-cni-582650 crio[778]: time="2026-01-10T08:54:55.93861657Z" level=info msg="Created container 94f6811426e7f0775be1818c57b896c7e2148d7e1e93e50b501749b5f6b2f190: kube-system/kindnet-gp4sj/kindnet-cni" id=7da753f7-6d09-42a0-ab1c-0e3383dbc488 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:55 newest-cni-582650 crio[778]: time="2026-01-10T08:54:55.939322608Z" level=info msg="Starting container: 94f6811426e7f0775be1818c57b896c7e2148d7e1e93e50b501749b5f6b2f190" id=9661f3f2-da6d-4d3e-af46-f19a321b3e66 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:54:55 newest-cni-582650 crio[778]: time="2026-01-10T08:54:55.941601114Z" level=info msg="Started container" PID=1840 containerID=94f6811426e7f0775be1818c57b896c7e2148d7e1e93e50b501749b5f6b2f190 description=kube-system/kindnet-gp4sj/kindnet-cni id=9661f3f2-da6d-4d3e-af46-f19a321b3e66 name=/runtime.v1.RuntimeService/StartContainer sandboxID=29891ceadf987dce2fbdd1c114ba718b44f46efa8577c676eaa4f3e9e14f2603
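	
	The CRI-O entries above show kindnetd resolved by tag, found missing, pulled by digest, and started. The same image state can be confirmed from inside the node with crictl; a sketch, with the tag copied from the log:
	
	    # Confirm the pull recorded above landed in CRI-O's image store.
	    sudo crictl images | grep kindnetd
	
	    # Show image metadata, including the digest reported after the pull.
	    sudo crictl inspecti docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88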
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	94f6811426e7f       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   Less than a second ago   Running             kindnet-cni               0                   29891ceadf987       kindnet-gp4sj                               kube-system
	ae14757854f6a       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                     1 second ago             Running             kube-proxy                0                   0eb3df150accb       kube-proxy-ldmfv                            kube-system
	438d478afd0fe       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                     11 seconds ago           Running             kube-apiserver            0                   ad6e310b32429       kube-apiserver-newest-cni-582650            kube-system
	e01d51074be9c       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                     11 seconds ago           Running             etcd                      0                   d897d808c19e1       etcd-newest-cni-582650                      kube-system
	ded359fc0e4a4       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                     11 seconds ago           Running             kube-scheduler            0                   46448dd16df95       kube-scheduler-newest-cni-582650            kube-system
	b6196ebc73b83       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                     11 seconds ago           Running             kube-controller-manager   0                   58479af8b70f2       kube-controller-manager-newest-cni-582650   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-582650
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-582650
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=newest-cni-582650
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T08_54_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 08:54:46 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-582650
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 08:54:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 08:54:49 +0000   Sat, 10 Jan 2026 08:54:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 08:54:49 +0000   Sat, 10 Jan 2026 08:54:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 08:54:49 +0000   Sat, 10 Jan 2026 08:54:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 10 Jan 2026 08:54:49 +0000   Sat, 10 Jan 2026 08:54:46 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-582650
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                31447831-8276-4e9c-bb29-38ef2ce553ce
	  Boot ID:                    7c12f492-59b6-440b-986c-741ece916a23
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-582650                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7s
	  kube-system                 kindnet-gp4sj                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-582650             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-582650    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-ldmfv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-582650             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-582650 event: Registered Node newest-cni-582650 in Controller
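	
	The Ready=False condition and not-ready taint above are why CoreDNS and storage-provisioner were reported Unschedulable earlier in the log: until a CNI config appears in /etc/cni/net.d/, only pods tolerating the taint (the static control-plane pods, kube-proxy, kindnet) can run. A quick check, sketched with kubectl:
	
	    # Show the taint blocking scheduling on the node described above.
	    kubectl get node newest-cni-582650 -o jsonpath='{.spec.taints}{"\n"}'
	
	    # Show the network-not-ready message keeping the taint in place.
	    kubectl get node newest-cni-582650 -o jsonpath='{.status.conditions[?(@.type=="Ready")].message}{"\n"}'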
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 4a 03 46 88 66 4e 08 06
	[  +6.406566] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.001429] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[ +24.002020] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7a 6f 49 7e 9d d1 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 3a ac 18 98 c9 08 06
	[Jan10 08:52] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4a 05 5c 86 cf df 08 06
	[  +0.000337] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.000602] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[  +0.739059] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	[  +1.988700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[ +20.040962] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 ac f8 cf fb 84 08 06
	[  +0.000375] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[  +0.000915] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
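	
	The "martian source" lines above are the kernel flagging packets whose source address is implausible for the interface they arrived on, a common artifact of the nested container networking these tests run inside. They appear only while martian logging is enabled, which can be verified directly (a sketch):
	
	    # 1 means the kernel logs martian packets, producing the dmesg lines above.
	    sysctl net.ipv4.conf.all.log_martians
	    sysctl net.ipv4.conf.eth0.log_martians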
	
	
	==> etcd [e01d51074be9cc7fa4192da17ecc9e461e3e1ba5cb29af90aafc37f8e9f00113] <==
	{"level":"info","ts":"2026-01-10T08:54:45.066023Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T08:54:45.356872Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2026-01-10T08:54:45.356994Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2026-01-10T08:54:45.357051Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2026-01-10T08:54:45.357070Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:54:45.357105Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2026-01-10T08:54:45.357758Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2026-01-10T08:54:45.357815Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:54:45.357856Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2026-01-10T08:54:45.357872Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2026-01-10T08:54:45.358650Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:newest-cni-582650 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T08:54:45.358706Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:54:45.358858Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:54:45.358906Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:54:45.359305Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T08:54:45.359382Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T08:54:45.359617Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:54:45.359779Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:54:45.359826Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2026-01-10T08:54:45.359890Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2026-01-10T08:54:45.360039Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:54:45.360090Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2026-01-10T08:54:45.360471Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:54:45.362903Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2026-01-10T08:54:45.363006Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 08:54:56 up 37 min,  0 user,  load average: 5.32, 4.32, 2.79
	Linux newest-cni-582650 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [94f6811426e7f0775be1818c57b896c7e2148d7e1e93e50b501749b5f6b2f190] <==
	I0110 08:54:56.134482       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 08:54:56.231215       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0110 08:54:56.231429       1 main.go:148] setting mtu 1500 for CNI 
	I0110 08:54:56.231472       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 08:54:56.231501       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T08:54:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 08:54:56.434885       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 08:54:56.434956       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 08:54:56.434969       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 08:54:56.435221       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
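	
	kindnetd above starts its kube-network-policies controller and then notes that its NRI plugin exited because /var/run/nri/nri.sock does not exist; the controller itself keeps running, so in this run the message is informational. Whether the runtime exposes an NRI socket is a one-line check (a sketch):
	
	    # kindnet's NRI plugin needs this socket; absent means CRI-O ran without NRI enabled.
	    test -S /var/run/nri/nri.sock && echo "nri socket present" || echo "nri socket absent"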
	
	
	==> kube-apiserver [438d478afd0fe231b863dcdf8ef073914dd1b211f97a1064c92f3acdb989633a] <==
	I0110 08:54:46.445378       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 08:54:46.445663       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0110 08:54:46.445781       1 default_servicecidr_controller.go:169] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	E0110 08:54:46.446880       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0110 08:54:46.453949       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I0110 08:54:46.454181       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:54:46.460559       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:54:46.650527       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 08:54:47.348928       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I0110 08:54:47.356104       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I0110 08:54:47.356198       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 08:54:47.866965       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 08:54:47.909940       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 08:54:47.954579       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0110 08:54:47.960941       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I0110 08:54:47.961940       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 08:54:47.965758       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 08:54:48.393187       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 08:54:49.246319       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 08:54:49.255777       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0110 08:54:49.265080       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0110 08:54:53.943634       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:54:53.947352       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:54:54.342308       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0110 08:54:54.393578       1 controller.go:667] quota admission added evaluator for: replicasets.apps
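	
	The quota-admission lines above are the apiserver registering evaluators as the first object of each kind is created. Its aggregate health, the same livez endpoint the control-plane-check polled earlier at https://192.168.94.2:8443/livez, can be read through kubectl (a sketch):
	
	    # Per-check liveness breakdown from the endpoint kubeadm polled.
	    kubectl get --raw='/livez?verbose'
	
	    # Per-check readiness breakdown.
	    kubectl get --raw='/readyz?verbose'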
	
	
	==> kube-controller-manager [b6196ebc73b83a73971a3c68779eae4c18e6b6c171e41fe950adbe838788f0d2] <==
	I0110 08:54:53.194062       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:53.194098       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I0110 08:54:53.194119       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:53.194240       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:53.194312       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:53.194332       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:53.194476       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:53.194489       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:53.194502       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:53.196918       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:53.196987       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:53.196972       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:53.197282       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:53.196994       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:53.196972       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:53.196954       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:53.197920       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:53.199821       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:53.200042       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:53.201332       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:54:53.202019       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-582650" podCIDRs=["10.42.0.0/24"]
	I0110 08:54:53.294191       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:53.294211       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 08:54:53.294219       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 08:54:53.302509       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [ae14757854f6aec94a1144e0c070d5d42ec008efeb6782c4a5b37f439b78d857] <==
	I0110 08:54:54.777367       1 server_linux.go:53] "Using iptables proxy"
	I0110 08:54:54.854624       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:54:54.954845       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:54.954928       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0110 08:54:54.955064       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 08:54:54.977092       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 08:54:54.977194       1 server_linux.go:136] "Using iptables Proxier"
	I0110 08:54:54.984560       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 08:54:54.985094       1 server.go:529] "Version info" version="v1.35.0"
	I0110 08:54:54.985133       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:54:54.987226       1 config.go:106] "Starting endpoint slice config controller"
	I0110 08:54:54.987803       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 08:54:54.987313       1 config.go:309] "Starting node config controller"
	I0110 08:54:54.987855       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 08:54:54.987862       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 08:54:54.987596       1 config.go:200] "Starting service config controller"
	I0110 08:54:54.987871       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 08:54:54.987572       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 08:54:54.987882       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 08:54:55.088675       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0110 08:54:55.088700       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 08:54:55.088758       1 shared_informer.go:356] "Caches are synced" controller="service config"
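	
	The kube-proxy warning above recommends constraining nodePortAddresses. In a kubeadm-managed cluster that setting lives in the kube-proxy ConfigMap, so a fix would look roughly like the following sketch (the "primary" value is the one the warning itself suggests):
	
	    # Locate the current setting in the config kubeadm generated.
	    kubectl -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses
	
	    # After setting nodePortAddresses: ["primary"] in the ConfigMap, restart kube-proxy.
	    kubectl -n kube-system edit configmap kube-proxy
	    kubectl -n kube-system rollout restart daemonset kube-proxy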
	
	
	==> kube-scheduler [ded359fc0e4a492d2144d45f9d14504e76a126baaa081deec4756b871654be75] <==
	E0110 08:54:46.407854       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 08:54:46.407948       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E0110 08:54:46.407961       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 08:54:46.408119       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E0110 08:54:46.408138       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 08:54:46.408126       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E0110 08:54:46.408224       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 08:54:46.408261       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 08:54:46.408270       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 08:54:46.408615       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 08:54:47.313180       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E0110 08:54:47.318548       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E0110 08:54:47.372960       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E0110 08:54:47.384143       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E0110 08:54:47.388089       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E0110 08:54:47.409836       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E0110 08:54:47.426235       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E0110 08:54:47.459523       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E0110 08:54:47.493987       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E0110 08:54:47.603149       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E0110 08:54:47.614364       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E0110 08:54:47.642709       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E0110 08:54:47.644658       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E0110 08:54:47.826329       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I0110 08:54:50.501156       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 08:54:50 newest-cni-582650 kubelet[1311]: I0110 08:54:50.149500    1311 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-582650" podStartSLOduration=1.149483184 podStartE2EDuration="1.149483184s" podCreationTimestamp="2026-01-10 08:54:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 08:54:50.14918095 +0000 UTC m=+1.158876880" watchObservedRunningTime="2026-01-10 08:54:50.149483184 +0000 UTC m=+1.159179108"
	Jan 10 08:54:50 newest-cni-582650 kubelet[1311]: I0110 08:54:50.186000    1311 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-582650" podStartSLOduration=1.185978616 podStartE2EDuration="1.185978616s" podCreationTimestamp="2026-01-10 08:54:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 08:54:50.162803811 +0000 UTC m=+1.172499743" watchObservedRunningTime="2026-01-10 08:54:50.185978616 +0000 UTC m=+1.195674558"
	Jan 10 08:54:50 newest-cni-582650 kubelet[1311]: I0110 08:54:50.186752    1311 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-582650" podStartSLOduration=3.186722988 podStartE2EDuration="3.186722988s" podCreationTimestamp="2026-01-10 08:54:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 08:54:50.186323435 +0000 UTC m=+1.196019360" watchObservedRunningTime="2026-01-10 08:54:50.186722988 +0000 UTC m=+1.196418925"
	Jan 10 08:54:50 newest-cni-582650 kubelet[1311]: I0110 08:54:50.205292    1311 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-582650" podStartSLOduration=1.205272751 podStartE2EDuration="1.205272751s" podCreationTimestamp="2026-01-10 08:54:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 08:54:50.204871577 +0000 UTC m=+1.214567507" watchObservedRunningTime="2026-01-10 08:54:50.205272751 +0000 UTC m=+1.214968679"
	Jan 10 08:54:51 newest-cni-582650 kubelet[1311]: E0110 08:54:51.108227    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-582650" containerName="kube-controller-manager"
	Jan 10 08:54:51 newest-cni-582650 kubelet[1311]: E0110 08:54:51.108301    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-582650" containerName="etcd"
	Jan 10 08:54:51 newest-cni-582650 kubelet[1311]: E0110 08:54:51.108461    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-582650" containerName="kube-scheduler"
	Jan 10 08:54:51 newest-cni-582650 kubelet[1311]: E0110 08:54:51.108560    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-582650" containerName="kube-apiserver"
	Jan 10 08:54:52 newest-cni-582650 kubelet[1311]: E0110 08:54:52.110060    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-582650" containerName="kube-apiserver"
	Jan 10 08:54:52 newest-cni-582650 kubelet[1311]: E0110 08:54:52.110243    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-582650" containerName="kube-scheduler"
	Jan 10 08:54:52 newest-cni-582650 kubelet[1311]: E0110 08:54:52.742182    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-582650" containerName="etcd"
	Jan 10 08:54:52 newest-cni-582650 kubelet[1311]: E0110 08:54:52.762031    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-582650" containerName="kube-controller-manager"
	Jan 10 08:54:53 newest-cni-582650 kubelet[1311]: I0110 08:54:53.209871    1311 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Jan 10 08:54:53 newest-cni-582650 kubelet[1311]: I0110 08:54:53.210866    1311 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Jan 10 08:54:53 newest-cni-582650 kubelet[1311]: E0110 08:54:53.949363    1311 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-582650" containerName="kube-apiserver"
	Jan 10 08:54:54 newest-cni-582650 kubelet[1311]: I0110 08:54:54.408536    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/02b5ffbb-b52f-4339-bee2-b9400a4714bd-kube-proxy\") pod \"kube-proxy-ldmfv\" (UID: \"02b5ffbb-b52f-4339-bee2-b9400a4714bd\") " pod="kube-system/kube-proxy-ldmfv"
	Jan 10 08:54:54 newest-cni-582650 kubelet[1311]: I0110 08:54:54.408593    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c1167720-98b8-4850-a264-11964eb2675d-cni-cfg\") pod \"kindnet-gp4sj\" (UID: \"c1167720-98b8-4850-a264-11964eb2675d\") " pod="kube-system/kindnet-gp4sj"
	Jan 10 08:54:54 newest-cni-582650 kubelet[1311]: I0110 08:54:54.408625    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snzhf\" (UniqueName: \"kubernetes.io/projected/02b5ffbb-b52f-4339-bee2-b9400a4714bd-kube-api-access-snzhf\") pod \"kube-proxy-ldmfv\" (UID: \"02b5ffbb-b52f-4339-bee2-b9400a4714bd\") " pod="kube-system/kube-proxy-ldmfv"
	Jan 10 08:54:54 newest-cni-582650 kubelet[1311]: I0110 08:54:54.408657    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02b5ffbb-b52f-4339-bee2-b9400a4714bd-xtables-lock\") pod \"kube-proxy-ldmfv\" (UID: \"02b5ffbb-b52f-4339-bee2-b9400a4714bd\") " pod="kube-system/kube-proxy-ldmfv"
	Jan 10 08:54:54 newest-cni-582650 kubelet[1311]: I0110 08:54:54.408678    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02b5ffbb-b52f-4339-bee2-b9400a4714bd-lib-modules\") pod \"kube-proxy-ldmfv\" (UID: \"02b5ffbb-b52f-4339-bee2-b9400a4714bd\") " pod="kube-system/kube-proxy-ldmfv"
	Jan 10 08:54:54 newest-cni-582650 kubelet[1311]: I0110 08:54:54.408696    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1167720-98b8-4850-a264-11964eb2675d-xtables-lock\") pod \"kindnet-gp4sj\" (UID: \"c1167720-98b8-4850-a264-11964eb2675d\") " pod="kube-system/kindnet-gp4sj"
	Jan 10 08:54:54 newest-cni-582650 kubelet[1311]: I0110 08:54:54.408785    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1167720-98b8-4850-a264-11964eb2675d-lib-modules\") pod \"kindnet-gp4sj\" (UID: \"c1167720-98b8-4850-a264-11964eb2675d\") " pod="kube-system/kindnet-gp4sj"
	Jan 10 08:54:54 newest-cni-582650 kubelet[1311]: I0110 08:54:54.408836    1311 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9wjg\" (UniqueName: \"kubernetes.io/projected/c1167720-98b8-4850-a264-11964eb2675d-kube-api-access-p9wjg\") pod \"kindnet-gp4sj\" (UID: \"c1167720-98b8-4850-a264-11964eb2675d\") " pod="kube-system/kindnet-gp4sj"
	Jan 10 08:54:55 newest-cni-582650 kubelet[1311]: I0110 08:54:55.134838    1311 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-ldmfv" podStartSLOduration=1.134712542 podStartE2EDuration="1.134712542s" podCreationTimestamp="2026-01-10 08:54:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-10 08:54:55.134517925 +0000 UTC m=+6.144213853" watchObservedRunningTime="2026-01-10 08:54:55.134712542 +0000 UTC m=+6.144408472"
	Jan 10 08:54:56 newest-cni-582650 kubelet[1311]: I0110 08:54:56.137277    1311 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-gp4sj" podStartSLOduration=0.915597093 podStartE2EDuration="2.137262398s" podCreationTimestamp="2026-01-10 08:54:54 +0000 UTC" firstStartedPulling="2026-01-10 08:54:54.680328715 +0000 UTC m=+5.690024646" lastFinishedPulling="2026-01-10 08:54:55.901993992 +0000 UTC m=+6.911689951" observedRunningTime="2026-01-10 08:54:56.137207988 +0000 UTC m=+7.146903919" watchObservedRunningTime="2026-01-10 08:54:56.137262398 +0000 UTC m=+7.146958326"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-582650 -n newest-cni-582650
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-582650 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-bmscc storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-582650 describe pod coredns-7d764666f9-bmscc storage-provisioner
E0110 08:54:57.227799    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/auto-472660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-582650 describe pod coredns-7d764666f9-bmscc storage-provisioner: exit status 1 (59.423746ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-bmscc" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-582650 describe pod coredns-7d764666f9-bmscc storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.49s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-225354 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-225354 --alsologtostderr -v=1: exit status 80 (2.509976616s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-225354 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 08:55:09.319501  339341 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:55:09.319626  339341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:55:09.319637  339341 out.go:374] Setting ErrFile to fd 2...
	I0110 08:55:09.319644  339341 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:55:09.319981  339341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:55:09.320343  339341 out.go:368] Setting JSON to false
	I0110 08:55:09.320372  339341 mustload.go:66] Loading cluster: default-k8s-diff-port-225354
	I0110 08:55:09.320846  339341 config.go:182] Loaded profile config "default-k8s-diff-port-225354": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:55:09.321411  339341 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-225354 --format={{.State.Status}}
	I0110 08:55:09.343922  339341 host.go:66] Checking if "default-k8s-diff-port-225354" exists ...
	I0110 08:55:09.344280  339341 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:55:09.406109  339341 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2026-01-10 08:55:09.394875143 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:55:09.406969  339341 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:default-k8s-diff-port-225354 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 08:55:09.409277  339341 out.go:179] * Pausing node default-k8s-diff-port-225354 ... 
	I0110 08:55:09.410543  339341 host.go:66] Checking if "default-k8s-diff-port-225354" exists ...
	I0110 08:55:09.410863  339341 ssh_runner.go:195] Run: systemctl --version
	I0110 08:55:09.410916  339341 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-225354
	I0110 08:55:09.432484  339341 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/default-k8s-diff-port-225354/id_rsa Username:docker}
	I0110 08:55:09.530857  339341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:55:09.546005  339341 pause.go:52] kubelet running: true
	I0110 08:55:09.546099  339341 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 08:55:09.756200  339341 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 08:55:09.756269  339341 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 08:55:09.832478  339341 cri.go:96] found id: "e01b7aef39ee6058f6e5264b1e701ec436e17ea543c0a7077a34986502eae931"
	I0110 08:55:09.832497  339341 cri.go:96] found id: "235ed1d8fdbe3c06d2c84ba29264bcc6d08d11a54a4c280982b06d15ec0b9d32"
	I0110 08:55:09.832502  339341 cri.go:96] found id: "72e24a04a184b275fa6ca7d48238546975c5ce403c3d895a3acdd96c296c0a84"
	I0110 08:55:09.832505  339341 cri.go:96] found id: "3c114dad8ad59dd4b14f99ea5527623796f92164415824fe236b0c02d4257c0b"
	I0110 08:55:09.832508  339341 cri.go:96] found id: "d433193a33ce7cc58ddea93f07610ab5f4bf6c281e65a05ab523fab1fa9029b0"
	I0110 08:55:09.832511  339341 cri.go:96] found id: "85fbcb73a888a911d321e3d1ed0152e1aa93447d76ca22015d3a09638892f2af"
	I0110 08:55:09.832514  339341 cri.go:96] found id: "6de83a52f42b4d00ef4463aa0a10635035e611d92fcb5f692497cd23e40d7676"
	I0110 08:55:09.832517  339341 cri.go:96] found id: "767f06c98be9d86d55d0cbaaa375406db22fd312258e490654cdcba950d47c27"
	I0110 08:55:09.832519  339341 cri.go:96] found id: "5055dfe1945b7e474350afd64ade8604c08027a381ce57320b00e445ef977a5c"
	I0110 08:55:09.832525  339341 cri.go:96] found id: "f4ba245850b91f72206873d0692ed94f1e4c692957ae0aab222f8c6cebe6e4e6"
	I0110 08:55:09.832528  339341 cri.go:96] found id: "803bc92acffae10929c55ac97f6e93e1c6fbc136ab07254668d7394f7b1734bc"
	I0110 08:55:09.832531  339341 cri.go:96] found id: ""
	I0110 08:55:09.832562  339341 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:55:09.845464  339341 retry.go:84] will retry after 200ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:55:09Z" level=error msg="open /run/runc: no such file or directory"
	I0110 08:55:10.047919  339341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:55:10.061804  339341 pause.go:52] kubelet running: false
	I0110 08:55:10.061852  339341 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 08:55:10.217924  339341 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 08:55:10.218015  339341 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 08:55:10.292158  339341 cri.go:96] found id: "e01b7aef39ee6058f6e5264b1e701ec436e17ea543c0a7077a34986502eae931"
	I0110 08:55:10.292187  339341 cri.go:96] found id: "235ed1d8fdbe3c06d2c84ba29264bcc6d08d11a54a4c280982b06d15ec0b9d32"
	I0110 08:55:10.292193  339341 cri.go:96] found id: "72e24a04a184b275fa6ca7d48238546975c5ce403c3d895a3acdd96c296c0a84"
	I0110 08:55:10.292198  339341 cri.go:96] found id: "3c114dad8ad59dd4b14f99ea5527623796f92164415824fe236b0c02d4257c0b"
	I0110 08:55:10.292203  339341 cri.go:96] found id: "d433193a33ce7cc58ddea93f07610ab5f4bf6c281e65a05ab523fab1fa9029b0"
	I0110 08:55:10.292207  339341 cri.go:96] found id: "85fbcb73a888a911d321e3d1ed0152e1aa93447d76ca22015d3a09638892f2af"
	I0110 08:55:10.292211  339341 cri.go:96] found id: "6de83a52f42b4d00ef4463aa0a10635035e611d92fcb5f692497cd23e40d7676"
	I0110 08:55:10.292215  339341 cri.go:96] found id: "767f06c98be9d86d55d0cbaaa375406db22fd312258e490654cdcba950d47c27"
	I0110 08:55:10.292220  339341 cri.go:96] found id: "5055dfe1945b7e474350afd64ade8604c08027a381ce57320b00e445ef977a5c"
	I0110 08:55:10.292230  339341 cri.go:96] found id: "f4ba245850b91f72206873d0692ed94f1e4c692957ae0aab222f8c6cebe6e4e6"
	I0110 08:55:10.292235  339341 cri.go:96] found id: "803bc92acffae10929c55ac97f6e93e1c6fbc136ab07254668d7394f7b1734bc"
	I0110 08:55:10.292239  339341 cri.go:96] found id: ""
	I0110 08:55:10.292282  339341 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:55:10.819834  339341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:55:10.834057  339341 pause.go:52] kubelet running: false
	I0110 08:55:10.834115  339341 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 08:55:10.967641  339341 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 08:55:10.967761  339341 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 08:55:11.035630  339341 cri.go:96] found id: "e01b7aef39ee6058f6e5264b1e701ec436e17ea543c0a7077a34986502eae931"
	I0110 08:55:11.035650  339341 cri.go:96] found id: "235ed1d8fdbe3c06d2c84ba29264bcc6d08d11a54a4c280982b06d15ec0b9d32"
	I0110 08:55:11.035654  339341 cri.go:96] found id: "72e24a04a184b275fa6ca7d48238546975c5ce403c3d895a3acdd96c296c0a84"
	I0110 08:55:11.035658  339341 cri.go:96] found id: "3c114dad8ad59dd4b14f99ea5527623796f92164415824fe236b0c02d4257c0b"
	I0110 08:55:11.035660  339341 cri.go:96] found id: "d433193a33ce7cc58ddea93f07610ab5f4bf6c281e65a05ab523fab1fa9029b0"
	I0110 08:55:11.035664  339341 cri.go:96] found id: "85fbcb73a888a911d321e3d1ed0152e1aa93447d76ca22015d3a09638892f2af"
	I0110 08:55:11.035666  339341 cri.go:96] found id: "6de83a52f42b4d00ef4463aa0a10635035e611d92fcb5f692497cd23e40d7676"
	I0110 08:55:11.035668  339341 cri.go:96] found id: "767f06c98be9d86d55d0cbaaa375406db22fd312258e490654cdcba950d47c27"
	I0110 08:55:11.035671  339341 cri.go:96] found id: "5055dfe1945b7e474350afd64ade8604c08027a381ce57320b00e445ef977a5c"
	I0110 08:55:11.035676  339341 cri.go:96] found id: "f4ba245850b91f72206873d0692ed94f1e4c692957ae0aab222f8c6cebe6e4e6"
	I0110 08:55:11.035679  339341 cri.go:96] found id: "803bc92acffae10929c55ac97f6e93e1c6fbc136ab07254668d7394f7b1734bc"
	I0110 08:55:11.035683  339341 cri.go:96] found id: ""
	I0110 08:55:11.035748  339341 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:55:11.484928  339341 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:55:11.509640  339341 pause.go:52] kubelet running: false
	I0110 08:55:11.509688  339341 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 08:55:11.668864  339341 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 08:55:11.668963  339341 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 08:55:11.738213  339341 cri.go:96] found id: "e01b7aef39ee6058f6e5264b1e701ec436e17ea543c0a7077a34986502eae931"
	I0110 08:55:11.738234  339341 cri.go:96] found id: "235ed1d8fdbe3c06d2c84ba29264bcc6d08d11a54a4c280982b06d15ec0b9d32"
	I0110 08:55:11.738238  339341 cri.go:96] found id: "72e24a04a184b275fa6ca7d48238546975c5ce403c3d895a3acdd96c296c0a84"
	I0110 08:55:11.738241  339341 cri.go:96] found id: "3c114dad8ad59dd4b14f99ea5527623796f92164415824fe236b0c02d4257c0b"
	I0110 08:55:11.738245  339341 cri.go:96] found id: "d433193a33ce7cc58ddea93f07610ab5f4bf6c281e65a05ab523fab1fa9029b0"
	I0110 08:55:11.738250  339341 cri.go:96] found id: "85fbcb73a888a911d321e3d1ed0152e1aa93447d76ca22015d3a09638892f2af"
	I0110 08:55:11.738254  339341 cri.go:96] found id: "6de83a52f42b4d00ef4463aa0a10635035e611d92fcb5f692497cd23e40d7676"
	I0110 08:55:11.738260  339341 cri.go:96] found id: "767f06c98be9d86d55d0cbaaa375406db22fd312258e490654cdcba950d47c27"
	I0110 08:55:11.738265  339341 cri.go:96] found id: "5055dfe1945b7e474350afd64ade8604c08027a381ce57320b00e445ef977a5c"
	I0110 08:55:11.738273  339341 cri.go:96] found id: "f4ba245850b91f72206873d0692ed94f1e4c692957ae0aab222f8c6cebe6e4e6"
	I0110 08:55:11.738278  339341 cri.go:96] found id: "803bc92acffae10929c55ac97f6e93e1c6fbc136ab07254668d7394f7b1734bc"
	I0110 08:55:11.738283  339341 cri.go:96] found id: ""
	I0110 08:55:11.738319  339341 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:55:11.752022  339341 out.go:203] 
	W0110 08:55:11.753153  339341 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:55:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:55:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 08:55:11.753168  339341 out.go:285] * 
	* 
	W0110 08:55:11.754910  339341 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:55:11.756256  339341 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-225354 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-225354
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-225354:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2d2060ee1efc6eaa651aa88ab17a771cbf4d2c84c916404a808d16a4e0ebc475",
	        "Created": "2026-01-10T08:53:05.098840342Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 323966,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T08:54:11.001513648Z",
	            "FinishedAt": "2026-01-10T08:54:09.679197355Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/2d2060ee1efc6eaa651aa88ab17a771cbf4d2c84c916404a808d16a4e0ebc475/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2d2060ee1efc6eaa651aa88ab17a771cbf4d2c84c916404a808d16a4e0ebc475/hostname",
	        "HostsPath": "/var/lib/docker/containers/2d2060ee1efc6eaa651aa88ab17a771cbf4d2c84c916404a808d16a4e0ebc475/hosts",
	        "LogPath": "/var/lib/docker/containers/2d2060ee1efc6eaa651aa88ab17a771cbf4d2c84c916404a808d16a4e0ebc475/2d2060ee1efc6eaa651aa88ab17a771cbf4d2c84c916404a808d16a4e0ebc475-json.log",
	        "Name": "/default-k8s-diff-port-225354",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-225354:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-225354",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2d2060ee1efc6eaa651aa88ab17a771cbf4d2c84c916404a808d16a4e0ebc475",
	                "LowerDir": "/var/lib/docker/overlay2/53567cf61aa1d670c6024da458ad8a084847f4bc5189cadc4f3bee860aaec98d-init/diff:/var/lib/docker/overlay2/97b14eb520192356c991915ce74cd6094e6ef948f398ac688b667ea18491ccde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/53567cf61aa1d670c6024da458ad8a084847f4bc5189cadc4f3bee860aaec98d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/53567cf61aa1d670c6024da458ad8a084847f4bc5189cadc4f3bee860aaec98d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/53567cf61aa1d670c6024da458ad8a084847f4bc5189cadc4f3bee860aaec98d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-225354",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-225354/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-225354",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-225354",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-225354",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9299501efb563d962baa4d8e5f36d3c3bd071340da360f9ce9020cabda86b341",
	            "SandboxKey": "/var/run/docker/netns/9299501efb56",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-225354": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be766c670cb0e4620e923d047ad46e6a4f2da6ed81b0b1be71e9292154f73b90",
	                    "EndpointID": "99e404ed8d686c79624e36810469bb980a4c455d789d105b65c49c7612319933",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "7e:e6:f7:7b:b2:45",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-225354",
	                        "2d2060ee1efc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-225354 -n default-k8s-diff-port-225354
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-225354 -n default-k8s-diff-port-225354: exit status 2 (358.261501ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-225354 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-225354 logs -n 25: (1.187975001s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-093083                                                                                                                                                                                                                     │ old-k8s-version-093083            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ image   │ no-preload-095312 image list --format=json                                                                                                                                                                                                    │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p no-preload-095312 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p old-k8s-version-093083                                                                                                                                                                                                                     │ old-k8s-version-093083            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p newest-cni-582650 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ delete  │ -p no-preload-095312                                                                                                                                                                                                                          │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ delete  │ -p no-preload-095312                                                                                                                                                                                                                          │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p test-preload-dl-gcs-424382 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-424382        │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-424382                                                                                                                                                                                                                 │ test-preload-dl-gcs-424382        │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p test-preload-dl-github-434342 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-434342     │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ image   │ embed-certs-072273 image list --format=json                                                                                                                                                                                                   │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p embed-certs-072273 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p test-preload-dl-github-434342                                                                                                                                                                                                              │ test-preload-dl-github-434342     │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-077581 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-077581 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-077581                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-077581 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ delete  │ -p embed-certs-072273                                                                                                                                                                                                                         │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ delete  │ -p embed-certs-072273                                                                                                                                                                                                                         │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable metrics-server -p newest-cni-582650 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ stop    │ -p newest-cni-582650 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable dashboard -p newest-cni-582650 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p newest-cni-582650 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:55 UTC │ 10 Jan 26 08:55 UTC │
	│ image   │ default-k8s-diff-port-225354 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-225354      │ jenkins │ v1.37.0 │ 10 Jan 26 08:55 UTC │ 10 Jan 26 08:55 UTC │
	│ pause   │ -p default-k8s-diff-port-225354 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-225354      │ jenkins │ v1.37.0 │ 10 Jan 26 08:55 UTC │                     │
	│ image   │ newest-cni-582650 image list --format=json                                                                                                                                                                                                    │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:55 UTC │ 10 Jan 26 08:55 UTC │
	│ pause   │ -p newest-cni-582650 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:55:00
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:55:00.043379  337651 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:55:00.043525  337651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:55:00.043538  337651 out.go:374] Setting ErrFile to fd 2...
	I0110 08:55:00.043544  337651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:55:00.043842  337651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:55:00.044396  337651 out.go:368] Setting JSON to false
	I0110 08:55:00.045548  337651 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2252,"bootTime":1768033048,"procs":261,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:55:00.045600  337651 start.go:143] virtualization: kvm guest
	I0110 08:55:00.047536  337651 out.go:179] * [newest-cni-582650] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 08:55:00.049141  337651 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:55:00.049136  337651 notify.go:221] Checking for updates...
	I0110 08:55:00.051578  337651 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:55:00.052772  337651 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:55:00.054143  337651 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:55:00.055504  337651 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 08:55:00.056874  337651 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:55:00.058529  337651 config.go:182] Loaded profile config "newest-cni-582650": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:55:00.059052  337651 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:55:00.083180  337651 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:55:00.083261  337651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:55:00.139318  337651 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2026-01-10 08:55:00.129647485 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:55:00.139469  337651 docker.go:319] overlay module found
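The `docker system info --format "{{json .}}"` probe above is how minikube snapshots the daemon's capabilities (CPU count, memory, cgroup driver) before deciding the profile can be reused. A minimal standalone sketch of the same probe in Go, using os/exec rather than minikube's cli_runner, decoding only a handful of the fields visible in the blob above:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"log"
    	"os/exec"
    )

    // dockerInfo holds just the fields this sketch cares about; the real
    // output carries far more (drivers, plugins, swarm state, ...).
    type dockerInfo struct {
    	NCPU            int    `json:"NCPU"`
    	MemTotal        int64  `json:"MemTotal"`
    	ServerVersion   string `json:"ServerVersion"`
    	CgroupDriver    string `json:"CgroupDriver"`
    	OperatingSystem string `json:"OperatingSystem"`
    }

    func main() {
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		log.Fatalf("docker system info: %v", err)
    	}
    	var info dockerInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		log.Fatalf("decode: %v", err)
    	}
    	fmt.Printf("server %s on %s: %d CPUs, %d bytes RAM, cgroup driver %s\n",
    		info.ServerVersion, info.OperatingSystem, info.NCPU, info.MemTotal, info.CgroupDriver)
    }

Any other field in the blob above can be pulled out the same way by adding it to the struct.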
	I0110 08:55:00.141276  337651 out.go:179] * Using the docker driver based on existing profile
	I0110 08:55:00.142458  337651 start.go:309] selected driver: docker
	I0110 08:55:00.142480  337651 start.go:928] validating driver "docker" against &{Name:newest-cni-582650 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-582650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:55:00.142582  337651 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:55:00.143267  337651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:55:00.197877  337651 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2026-01-10 08:55:00.188806511 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:55:00.198241  337651 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 08:55:00.198276  337651 cni.go:84] Creating CNI manager for ""
	I0110 08:55:00.198348  337651 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:55:00.198405  337651 start.go:353] cluster config:
	{Name:newest-cni-582650 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-582650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:55:00.200031  337651 out.go:179] * Starting "newest-cni-582650" primary control-plane node in "newest-cni-582650" cluster
	I0110 08:55:00.201239  337651 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 08:55:00.202384  337651 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:55:00.203414  337651 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:55:00.203449  337651 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 08:55:00.203455  337651 cache.go:65] Caching tarball of preloaded images
	I0110 08:55:00.203502  337651 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:55:00.203549  337651 preload.go:251] Found /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 08:55:00.203565  337651 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 08:55:00.203687  337651 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/config.json ...
	I0110 08:55:00.223996  337651 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 08:55:00.224013  337651 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 08:55:00.224029  337651 cache.go:243] Successfully downloaded all kic artifacts
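The image.go lines above show the cache shortcut: the kicbase digest is already present in the local daemon, so both the pull and the load are skipped. A hedged sketch of that existence test, leaning on the fact that `docker image inspect` exits non-zero for an unknown reference (the helper name is invented for illustration):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // imageInDaemon reports whether the local docker daemon already has ref.
    // `docker image inspect` exits non-zero when the image is absent, which
    // is the only signal this sketch relies on.
    func imageInDaemon(ref string) bool {
    	cmd := exec.Command("docker", "image", "inspect", "--format", "{{.Id}}", ref)
    	return cmd.Run() == nil
    }

    func main() {
    	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401"
    	if imageInDaemon(ref) {
    		fmt.Println("found in daemon, skipping pull")
    	} else {
    		fmt.Println("not cached, would pull")
    	}
    }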
	I0110 08:55:00.224057  337651 start.go:360] acquireMachinesLock for newest-cni-582650: {Name:mk8a366cb6a19cf5fbfd56cf9cfee17123f828e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:55:00.224121  337651 start.go:364] duration metric: took 36.014µs to acquireMachinesLock for "newest-cni-582650"
	I0110 08:55:00.224137  337651 start.go:96] Skipping create...Using existing machine configuration
	I0110 08:55:00.224141  337651 fix.go:54] fixHost starting: 
	I0110 08:55:00.224354  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:00.241369  337651 fix.go:112] recreateIfNeeded on newest-cni-582650: state=Stopped err=<nil>
	W0110 08:55:00.241406  337651 fix.go:138] unexpected machine state, will restart: <nil>
	I0110 08:55:00.243298  337651 out.go:252] * Restarting existing docker container for "newest-cni-582650" ...
	I0110 08:55:00.243356  337651 cli_runner.go:164] Run: docker start newest-cni-582650
	I0110 08:55:00.486349  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:00.505501  337651 kic.go:430] container "newest-cni-582650" state is running.
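kic.go:430 declares the container running only after re-inspecting it post-`docker start`. A small polling loop in the same spirit; the 30-second deadline is an assumption, not minikube's actual timeout:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    	"time"
    )

    // containerState returns the docker-reported status string for name,
    // e.g. "running" or "exited".
    func containerState(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect",
    		"--format", "{{.State.Status}}", name).Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	const name = "newest-cni-582650"
    	deadline := time.Now().Add(30 * time.Second) // assumed timeout
    	for {
    		state, err := containerState(name)
    		if err != nil {
    			log.Fatalf("inspect %s: %v", name, err)
    		}
    		if state == "running" {
    			fmt.Println("container is running")
    			return
    		}
    		if time.Now().After(deadline) {
    			log.Fatalf("gave up: state=%s", state)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }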
	I0110 08:55:00.505877  337651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-582650
	I0110 08:55:00.524765  337651 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/config.json ...
	I0110 08:55:00.525042  337651 machine.go:94] provisionDockerMachine start ...
	I0110 08:55:00.525107  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:00.544567  337651 main.go:144] libmachine: Using SSH client type: native
	I0110 08:55:00.544832  337651 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0110 08:55:00.544847  337651 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 08:55:00.545519  337651 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40212->127.0.0.1:33133: read: connection reset by peer
	I0110 08:55:03.674623  337651 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-582650
	
	I0110 08:55:03.674651  337651 ubuntu.go:182] provisioning hostname "newest-cni-582650"
	I0110 08:55:03.674704  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:03.692657  337651 main.go:144] libmachine: Using SSH client type: native
	I0110 08:55:03.692890  337651 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0110 08:55:03.692907  337651 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-582650 && echo "newest-cni-582650" | sudo tee /etc/hostname
	I0110 08:55:03.828409  337651 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-582650
	
	I0110 08:55:03.828473  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:03.846317  337651 main.go:144] libmachine: Using SSH client type: native
	I0110 08:55:03.846526  337651 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0110 08:55:03.846543  337651 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-582650' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-582650/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-582650' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 08:55:03.973261  337651 main.go:144] libmachine: SSH cmd err, output: <nil>: 
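The shell block above is deliberately idempotent: it leaves /etc/hosts alone when the hostname already appears, rewrites an existing 127.0.1.1 line if there is one, and only otherwise appends. The same decision tree sketched locally in Go (helper name invented; point it at a scratch copy rather than a live /etc/hosts):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"regexp"
    	"strings"
    )

    // ensureHostsEntry mirrors the provisioning shell: if any line already
    // ends with host, do nothing; else rewrite an existing 127.0.1.1 line
    // in place, or append a fresh entry.
    func ensureHostsEntry(path, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(host) + `$`).Match(data) {
    		return nil // already present
    	}
    	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	entry := "127.0.1.1 " + host
    	var out string
    	if loopback.Match(data) {
    		out = loopback.ReplaceAllString(string(data), entry)
    	} else {
    		out = strings.TrimRight(string(data), "\n") + "\n" + entry + "\n"
    	}
    	return os.WriteFile(path, []byte(out), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("hosts.copy", "newest-cni-582650"); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("hosts entry ensured")
    }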
	I0110 08:55:03.973293  337651 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-3641/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-3641/.minikube}
	I0110 08:55:03.973332  337651 ubuntu.go:190] setting up certificates
	I0110 08:55:03.973353  337651 provision.go:84] configureAuth start
	I0110 08:55:03.973412  337651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-582650
	I0110 08:55:03.991962  337651 provision.go:143] copyHostCerts
	I0110 08:55:03.992035  337651 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem, removing ...
	I0110 08:55:03.992063  337651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem
	I0110 08:55:03.992169  337651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem (1078 bytes)
	I0110 08:55:03.992344  337651 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem, removing ...
	I0110 08:55:03.992367  337651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem
	I0110 08:55:03.992428  337651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem (1123 bytes)
	I0110 08:55:03.992533  337651 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem, removing ...
	I0110 08:55:03.992544  337651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem
	I0110 08:55:03.992585  337651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem (1675 bytes)
	I0110 08:55:03.992659  337651 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem org=jenkins.newest-cni-582650 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-582650]
	I0110 08:55:04.081124  337651 provision.go:177] copyRemoteCerts
	I0110 08:55:04.081206  337651 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 08:55:04.081249  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.100529  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:04.194315  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 08:55:04.211927  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 08:55:04.229325  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0110 08:55:04.246098  337651 provision.go:87] duration metric: took 272.723804ms to configureAuth
	I0110 08:55:04.246123  337651 ubuntu.go:206] setting minikube options for container-runtime
	I0110 08:55:04.246301  337651 config.go:182] Loaded profile config "newest-cni-582650": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:55:04.246422  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.265307  337651 main.go:144] libmachine: Using SSH client type: native
	I0110 08:55:04.265532  337651 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0110 08:55:04.265554  337651 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 08:55:04.543910  337651 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 08:55:04.543936  337651 machine.go:97] duration metric: took 4.018877882s to provisionDockerMachine
	I0110 08:55:04.543951  337651 start.go:293] postStartSetup for "newest-cni-582650" (driver="docker")
	I0110 08:55:04.543965  337651 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 08:55:04.544023  337651 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 08:55:04.544069  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.562427  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:04.656029  337651 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 08:55:04.659421  337651 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 08:55:04.659453  337651 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 08:55:04.659466  337651 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-3641/.minikube/addons for local assets ...
	I0110 08:55:04.659517  337651 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-3641/.minikube/files for local assets ...
	I0110 08:55:04.659609  337651 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem -> 71832.pem in /etc/ssl/certs
	I0110 08:55:04.659755  337651 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 08:55:04.668433  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem --> /etc/ssl/certs/71832.pem (1708 bytes)
	I0110 08:55:04.685867  337651 start.go:296] duration metric: took 141.902418ms for postStartSetup
	I0110 08:55:04.685949  337651 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:55:04.686014  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.704239  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:04.794956  337651 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 08:55:04.799376  337651 fix.go:56] duration metric: took 4.575228964s for fixHost
	I0110 08:55:04.799403  337651 start.go:83] releasing machines lock for "newest-cni-582650", held for 4.575271886s
	I0110 08:55:04.799453  337651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-582650
	I0110 08:55:04.817146  337651 ssh_runner.go:195] Run: cat /version.json
	I0110 08:55:04.817199  337651 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 08:55:04.817280  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.817203  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.836895  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:04.837570  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:04.980302  337651 ssh_runner.go:195] Run: systemctl --version
	I0110 08:55:04.986927  337651 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 08:55:05.021964  337651 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 08:55:05.026769  337651 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 08:55:05.026837  337651 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 08:55:05.035076  337651 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 08:55:05.035124  337651 start.go:496] detecting cgroup driver to use...
	I0110 08:55:05.035171  337651 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 08:55:05.035219  337651 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 08:55:05.049316  337651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 08:55:05.061222  337651 docker.go:218] disabling cri-docker service (if available) ...
	I0110 08:55:05.061266  337651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 08:55:05.076828  337651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 08:55:05.088925  337651 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 08:55:05.169201  337651 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 08:55:05.250358  337651 docker.go:234] disabling docker service ...
	I0110 08:55:05.250421  337651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 08:55:05.265340  337651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 08:55:05.277642  337651 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 08:55:05.354970  337651 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 08:55:05.438086  337651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 08:55:05.450523  337651 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 08:55:05.464552  337651 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 08:55:05.464606  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.473501  337651 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 08:55:05.473560  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.482110  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.490292  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.498788  337651 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 08:55:05.507142  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.515949  337651 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.524862  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.533635  337651 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 08:55:05.541045  337651 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 08:55:05.548719  337651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:55:05.628011  337651 ssh_runner.go:195] Run: sudo systemctl restart crio
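The sed one-liners above pin `pause_image` and `cgroup_manager` in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A local sketch of that substitution on a file copy (the helper is illustrative, not minikube's remote ssh_runner path):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"regexp"
    )

    // pinCrioSetting rewrites any existing `key = ...` line to the wanted
    // value, the same effect as the sed one-liners in the log.
    func pinCrioSetting(conf []byte, key, value string) []byte {
    	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
    	return re.ReplaceAll(conf, []byte(fmt.Sprintf("%s = %q", key, value)))
    }

    func main() {
    	conf, err := os.ReadFile("02-crio.conf.copy")
    	if err != nil {
    		log.Fatal(err)
    	}
    	conf = pinCrioSetting(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
    	conf = pinCrioSetting(conf, "cgroup_manager", "systemd")
    	if err := os.WriteFile("02-crio.conf.copy", conf, 0644); err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("rewrote pause_image and cgroup_manager")
    }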
	I0110 08:55:05.763111  337651 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 08:55:05.763196  337651 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 08:55:05.767248  337651 start.go:574] Will wait 60s for crictl version
	I0110 08:55:05.767300  337651 ssh_runner.go:195] Run: which crictl
	I0110 08:55:05.770834  337651 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 08:55:05.795545  337651 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 08:55:05.795612  337651 ssh_runner.go:195] Run: crio --version
	I0110 08:55:05.822934  337651 ssh_runner.go:195] Run: crio --version
	I0110 08:55:05.854094  337651 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 08:55:05.855440  337651 cli_runner.go:164] Run: docker network inspect newest-cni-582650 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:55:05.874881  337651 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0110 08:55:05.878985  337651 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 08:55:05.890627  337651 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0110 08:55:05.891718  337651 kubeadm.go:884] updating cluster {Name:newest-cni-582650 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-582650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 08:55:05.891861  337651 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:55:05.891935  337651 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 08:55:05.926755  337651 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 08:55:05.926777  337651 crio.go:433] Images already preloaded, skipping extraction
	I0110 08:55:05.926824  337651 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 08:55:05.953234  337651 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 08:55:05.953260  337651 cache_images.go:86] Images are preloaded, skipping loading
	I0110 08:55:05.953268  337651 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I0110 08:55:05.953454  337651 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-582650 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-582650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 08:55:05.953555  337651 ssh_runner.go:195] Run: crio config
	I0110 08:55:05.999327  337651 cni.go:84] Creating CNI manager for ""
	I0110 08:55:05.999360  337651 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:55:05.999383  337651 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I0110 08:55:05.999417  337651 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-582650 NodeName:newest-cni-582650 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 08:55:05.999536  337651 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-582650"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
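	The generated kubeadm.yaml above stacks four documents in one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A small sketch that walks such a multi-document file and reports each apiVersion/kind pair, using gopkg.in/yaml.v3 for generic decoding (it does not validate against the kubeadm API types):

    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	// yaml.v3's decoder yields one document per Decode call and
    	// signals the end of the stream with io.EOF.
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			log.Fatalf("decode: %v", err)
    		}
    		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
    	}
    }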
	
	I0110 08:55:05.999603  337651 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 08:55:06.008278  337651 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 08:55:06.008353  337651 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 08:55:06.015782  337651 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 08:55:06.028209  337651 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 08:55:06.040652  337651 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I0110 08:55:06.053361  337651 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0110 08:55:06.057091  337651 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 08:55:06.067273  337651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:55:06.148919  337651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:55:06.175368  337651 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650 for IP: 192.168.94.2
	I0110 08:55:06.175391  337651 certs.go:195] generating shared ca certs ...
	I0110 08:55:06.175411  337651 certs.go:227] acquiring lock for ca certs: {Name:mk00e261408d0e9fd9be39128613c5110a764de2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:55:06.175572  337651 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.key
	I0110 08:55:06.175708  337651 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.key
	I0110 08:55:06.175769  337651 certs.go:257] generating profile certs ...
	I0110 08:55:06.175934  337651 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/client.key
	I0110 08:55:06.176008  337651 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/apiserver.key.0aa7c905
	I0110 08:55:06.176063  337651 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/proxy-client.key
	I0110 08:55:06.176203  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183.pem (1338 bytes)
	W0110 08:55:06.176248  337651 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183_empty.pem, impossibly tiny 0 bytes
	I0110 08:55:06.176263  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem (1675 bytes)
	I0110 08:55:06.176306  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem (1078 bytes)
	I0110 08:55:06.176343  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem (1123 bytes)
	I0110 08:55:06.176377  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem (1675 bytes)
	I0110 08:55:06.176437  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem (1708 bytes)
	I0110 08:55:06.177184  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 08:55:06.196870  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 08:55:06.215933  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 08:55:06.235476  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 08:55:06.258185  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 08:55:06.277751  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 08:55:06.295268  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 08:55:06.312421  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 08:55:06.329617  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem --> /usr/share/ca-certificates/71832.pem (1708 bytes)
	I0110 08:55:06.346649  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 08:55:06.364016  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183.pem --> /usr/share/ca-certificates/7183.pem (1338 bytes)
	I0110 08:55:06.382003  337651 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 08:55:06.394320  337651 ssh_runner.go:195] Run: openssl version
	I0110 08:55:06.400371  337651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/71832.pem
	I0110 08:55:06.407685  337651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/71832.pem /etc/ssl/certs/71832.pem
	I0110 08:55:06.415138  337651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71832.pem
	I0110 08:55:06.419188  337651 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 08:23 /usr/share/ca-certificates/71832.pem
	I0110 08:55:06.419234  337651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71832.pem
	I0110 08:55:06.454164  337651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 08:55:06.461860  337651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:55:06.470568  337651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 08:55:06.478089  337651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:55:06.481724  337651 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:55:06.481786  337651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:55:06.515894  337651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 08:55:06.523865  337651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7183.pem
	I0110 08:55:06.531389  337651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7183.pem /etc/ssl/certs/7183.pem
	I0110 08:55:06.538646  337651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7183.pem
	I0110 08:55:06.542199  337651 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 08:23 /usr/share/ca-certificates/7183.pem
	I0110 08:55:06.542240  337651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7183.pem
	I0110 08:55:06.577649  337651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 08:55:06.585536  337651 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 08:55:06.589317  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 08:55:06.625993  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 08:55:06.660607  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 08:55:06.701294  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 08:55:06.750337  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 08:55:06.795920  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
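Each `openssl x509 ... -checkend 86400` probe above asks a single question: does the certificate survive the next 24 hours? The equivalent check in Go's crypto/x509, run locally against a PEM file (the file name echoes the cert set above; adjust the path for a real run):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the first certificate in pemPath lapses
    // inside d, the same question openssl's -checkend answers.
    func expiresWithin(pemPath string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(pemPath)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", pemPath)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("expires within 24h:", soon)
    }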
	I0110 08:55:06.844782  337651 kubeadm.go:401] StartCluster: {Name:newest-cni-582650 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-582650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:55:06.844904  337651 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:55:06.844978  337651 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:55:06.876617  337651 cri.go:96] found id: "7365649bf838b1d9b8c45dcaa7ce29160f5dd8674c9802b9f0610e314ca173cc"
	I0110 08:55:06.876642  337651 cri.go:96] found id: "ce0d065d2705be147f0cd136ee494369b9a709e0327cb0d06b594a233ab11c96"
	I0110 08:55:06.876646  337651 cri.go:96] found id: "04153603f19d1830f6cad025b9d59e70752a925ea51b14474fc99161af31a6c1"
	I0110 08:55:06.876650  337651 cri.go:96] found id: "90c58c3cd9924aced7da1338afb4ee8fd756be611e2c9aed303c38a538dfbacc"
	I0110 08:55:06.876653  337651 cri.go:96] found id: ""
	I0110 08:55:06.876706  337651 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 08:55:06.889419  337651 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:55:06Z" level=error msg="open /run/runc: no such file or directory"
	I0110 08:55:06.889478  337651 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 08:55:06.897471  337651 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 08:55:06.897491  337651 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 08:55:06.897550  337651 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 08:55:06.905056  337651 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 08:55:06.905848  337651 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-582650" does not appear in /home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:55:06.906229  337651 kubeconfig.go:62] /home/jenkins/minikube-integration/22427-3641/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-582650" cluster setting kubeconfig missing "newest-cni-582650" context setting]
	I0110 08:55:06.906722  337651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/kubeconfig: {Name:mk0d29b3b0ee1fd71729aff31f901be50a2d0664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
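The kubeconfig repair above adds the missing cluster and context stanzas and rewrites the file under a write lock. A minimal client-go sketch of the same verify-then-repair flow; the path, auth name, and server URL are illustrative, and minikube's own kubeconfig.go does more than this:

    package main

    import (
    	"log"

    	"k8s.io/client-go/tools/clientcmd"
    	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
    	const path = "/home/jenkins/.kube/config" // illustrative path
    	const name = "newest-cni-582650"

    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Repair: add cluster/context entries only if they are missing,
    	// mirroring the "needs updating (will repair)" branch above.
    	if _, ok := cfg.Clusters[name]; !ok {
    		cfg.Clusters[name] = &clientcmdapi.Cluster{Server: "https://192.168.94.2:8443"}
    	}
    	if _, ok := cfg.Contexts[name]; !ok {
    		cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
    	}
    	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
    		log.Fatal(err)
    	}
    }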
	I0110 08:55:06.907996  337651 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 08:55:06.916223  337651 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I0110 08:55:06.916252  337651 kubeadm.go:602] duration metric: took 18.746858ms to restartPrimaryControlPlane
	I0110 08:55:06.916267  337651 kubeadm.go:403] duration metric: took 71.493899ms to StartCluster
	I0110 08:55:06.916288  337651 settings.go:142] acquiring lock: {Name:mkbb32fc6441ceb31ce2923ea8999f8375298f08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:55:06.916352  337651 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:55:06.917032  337651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/kubeconfig: {Name:mk0d29b3b0ee1fd71729aff31f901be50a2d0664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:55:06.917252  337651 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 08:55:06.917332  337651 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 08:55:06.917423  337651 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-582650"
	I0110 08:55:06.917441  337651 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-582650"
	W0110 08:55:06.917448  337651 addons.go:248] addon storage-provisioner should already be in state true
	I0110 08:55:06.917456  337651 addons.go:70] Setting dashboard=true in profile "newest-cni-582650"
	I0110 08:55:06.917486  337651 addons.go:239] Setting addon dashboard=true in "newest-cni-582650"
	W0110 08:55:06.917500  337651 addons.go:248] addon dashboard should already be in state true
	I0110 08:55:06.917498  337651 config.go:182] Loaded profile config "newest-cni-582650": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:55:06.917505  337651 addons.go:70] Setting default-storageclass=true in profile "newest-cni-582650"
	I0110 08:55:06.917531  337651 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-582650"
	I0110 08:55:06.917545  337651 host.go:66] Checking if "newest-cni-582650" exists ...
	I0110 08:55:06.917487  337651 host.go:66] Checking if "newest-cni-582650" exists ...
	I0110 08:55:06.917888  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:06.918065  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:06.918090  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:06.922752  337651 out.go:179] * Verifying Kubernetes components...
	I0110 08:55:06.924557  337651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:55:06.944895  337651 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 08:55:06.944980  337651 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 08:55:06.946125  337651 addons.go:239] Setting addon default-storageclass=true in "newest-cni-582650"
	W0110 08:55:06.946159  337651 addons.go:248] addon default-storageclass should already be in state true
	I0110 08:55:06.946192  337651 host.go:66] Checking if "newest-cni-582650" exists ...
	I0110 08:55:06.946653  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:06.946876  337651 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 08:55:06.946895  337651 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 08:55:06.946956  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:06.947943  337651 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 08:55:06.949187  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 08:55:06.949212  337651 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 08:55:06.949272  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:06.979786  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:06.982713  337651 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 08:55:06.982757  337651 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 08:55:06.982820  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:06.986338  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:07.009531  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:07.065163  337651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:55:07.081832  337651 api_server.go:52] waiting for apiserver process to appear ...
	I0110 08:55:07.081898  337651 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 08:55:07.095536  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 08:55:07.095562  337651 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 08:55:07.096364  337651 api_server.go:72] duration metric: took 179.085582ms to wait for apiserver process to appear ...
	I0110 08:55:07.096384  337651 api_server.go:88] waiting for apiserver healthz status ...
	I0110 08:55:07.096403  337651 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0110 08:55:07.100030  337651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 08:55:07.111493  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 08:55:07.111519  337651 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 08:55:07.122472  337651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 08:55:07.128466  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 08:55:07.128484  337651 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 08:55:07.144597  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 08:55:07.144620  337651 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 08:55:07.160177  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 08:55:07.160236  337651 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 08:55:07.177064  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 08:55:07.177088  337651 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 08:55:07.192696  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 08:55:07.192723  337651 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 08:55:07.207042  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 08:55:07.207063  337651 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 08:55:07.219547  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 08:55:07.219572  337651 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 08:55:07.232446  337651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 08:55:08.397883  337651 api_server.go:325] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0110 08:55:08.397912  337651 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0110 08:55:08.397934  337651 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0110 08:55:08.408043  337651 api_server.go:325] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0110 08:55:08.408134  337651 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0110 08:55:08.597191  337651 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0110 08:55:08.602170  337651 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 08:55:08.602223  337651 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 08:55:08.914012  337651 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.813953016s)
	I0110 08:55:08.914115  337651 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.791608298s)
	I0110 08:55:08.914183  337651 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.681703498s)
	I0110 08:55:08.916001  337651 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-582650 addons enable metrics-server
	
	I0110 08:55:08.924789  337651 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 08:55:08.926065  337651 addons.go:530] duration metric: took 2.008739629s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0110 08:55:09.096869  337651 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0110 08:55:09.101576  337651 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 08:55:09.101606  337651 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 08:55:09.597108  337651 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0110 08:55:09.606628  337651 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0110 08:55:09.608612  337651 api_server.go:141] control plane version: v1.35.0
	I0110 08:55:09.608678  337651 api_server.go:131] duration metric: took 2.512285395s to wait for apiserver health ...
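The healthz progression above is the expected restart sequence: 403 while anonymous access to /healthz is still forbidden, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, then 200. A minimal Go sketch of such a poll loop, with the endpoint and roughly 500ms cadence taken from the log; TLS verification is skipped because the probe is anonymous against the cluster's self-signed serving certificate:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The apiserver serves a cluster-local cert, so an anonymous
    		// probe has to skip verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	for {
    		resp, err := client.Get("https://192.168.94.2:8443/healthz")
    		if err == nil {
    			code := resp.StatusCode
    			resp.Body.Close()
    			if code == http.StatusOK {
    				fmt.Println("apiserver healthy")
    				return
    			}
    			fmt.Println("not ready yet:", code) // 403/500 during startup
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }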
	I0110 08:55:09.608701  337651 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 08:55:09.612542  337651 system_pods.go:59] 8 kube-system pods found
	I0110 08:55:09.612572  337651 system_pods.go:61] "coredns-7d764666f9-bmscc" [bc0ad55b-bbf6-4898-a38a-7a1a2d154cb3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 08:55:09.612580  337651 system_pods.go:61] "etcd-newest-cni-582650" [bb439312-4d17-46e1-9d07-4b972ad2299b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 08:55:09.612587  337651 system_pods.go:61] "kindnet-gp4sj" [c1167720-98b8-4850-a264-11964eb2675d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 08:55:09.612599  337651 system_pods.go:61] "kube-apiserver-newest-cni-582650" [947302b1-615d-4f31-976c-039fcf37be97] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 08:55:09.612607  337651 system_pods.go:61] "kube-controller-manager-newest-cni-582650" [c2156827-ae41-4c25-958a-ea329f7adf65] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 08:55:09.612614  337651 system_pods.go:61] "kube-proxy-ldmfv" [02b5ffbb-b52f-4339-bee2-b9400a4714bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 08:55:09.612621  337651 system_pods.go:61] "kube-scheduler-newest-cni-582650" [8d788728-c388-42a6-9bcd-9ab2bf3468fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 08:55:09.612634  337651 system_pods.go:61] "storage-provisioner" [349ec60d-a776-479e-b9a0-892989e886eb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 08:55:09.612643  337651 system_pods.go:74] duration metric: took 3.926901ms to wait for pod list to return data ...
	I0110 08:55:09.612653  337651 default_sa.go:34] waiting for default service account to be created ...
	I0110 08:55:09.615183  337651 default_sa.go:45] found service account: "default"
	I0110 08:55:09.615208  337651 default_sa.go:55] duration metric: took 2.548851ms for default service account to be created ...
	I0110 08:55:09.615222  337651 kubeadm.go:587] duration metric: took 2.697945894s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 08:55:09.615245  337651 node_conditions.go:102] verifying NodePressure condition ...
	I0110 08:55:09.617802  337651 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 08:55:09.617836  337651 node_conditions.go:123] node cpu capacity is 8
	I0110 08:55:09.617855  337651 node_conditions.go:105] duration metric: took 2.604361ms to run NodePressure ...
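The system_pods and NodePressure checks boil down to listing nodes and reading their capacity and condition lists. A hedged client-go sketch of the same reads (kubeconfig path illustrative):

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, n := range nodes.Items {
    		// The log reports these as "node cpu capacity is 8" and the
    		// ephemeral storage figure in Ki.
    		fmt.Println("cpu:", n.Status.Capacity.Cpu().String(),
    			"ephemeral-storage:", n.Status.Capacity.StorageEphemeral().String())
    		for _, c := range n.Status.Conditions {
    			// MemoryPressure/DiskPressure/PIDPressure should be False on a healthy node.
    			fmt.Printf("  %s=%s\n", c.Type, c.Status)
    		}
    	}
    }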
	I0110 08:55:09.617875  337651 start.go:242] waiting for startup goroutines ...
	I0110 08:55:09.617884  337651 start.go:247] waiting for cluster config update ...
	I0110 08:55:09.617898  337651 start.go:256] writing updated cluster config ...
	I0110 08:55:09.618148  337651 ssh_runner.go:195] Run: rm -f paused
	I0110 08:55:09.667016  337651 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 08:55:09.669727  337651 out.go:179] * Done! kubectl is now configured to use "newest-cni-582650" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 10 08:54:42 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:42.078972388Z" level=info msg="Started container" PID=1804 containerID=69086618990f9c502da1b3075049a6a5604434cff22757ba9df7794639a6d093 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml/dashboard-metrics-scraper id=3572ce21-e1bf-40ca-8bbe-c676b474887a name=/runtime.v1.RuntimeService/StartContainer sandboxID=6db5ee88501b7f60b1ce1831614c2769b806fab40c20ccf42bc03d438425b3ca
	Jan 10 08:54:42 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:42.120902328Z" level=info msg="Removing container: abdc865b79b316a8cab4eb0835c4e81a86adcfc0e06c79f1551a7e854cbe6e00" id=62b2bac4-570d-415a-b23a-67978744ff77 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 08:54:42 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:42.130108588Z" level=info msg="Removed container abdc865b79b316a8cab4eb0835c4e81a86adcfc0e06c79f1551a7e854cbe6e00: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml/dashboard-metrics-scraper" id=62b2bac4-570d-415a-b23a-67978744ff77 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 08:54:51 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:51.146298131Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1c1d9dff-9d09-4d06-bbdf-06fb8492b64d name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:51 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:51.1473432Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bb7b500a-c3e1-4f43-bbac-01f01c7d65c1 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:51 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:51.148533873Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ee4f3157-4a7a-472c-a5be-63d520de2bea name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:51 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:51.148680472Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:51 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:51.153038614Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:51 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:51.153233242Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3d1ec75d7051adf29c699b72e16595865f75af31caf6f818052715c689c6272c/merged/etc/passwd: no such file or directory"
	Jan 10 08:54:51 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:51.153268649Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3d1ec75d7051adf29c699b72e16595865f75af31caf6f818052715c689c6272c/merged/etc/group: no such file or directory"
	Jan 10 08:54:51 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:51.153558391Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:51 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:51.179939143Z" level=info msg="Created container e01b7aef39ee6058f6e5264b1e701ec436e17ea543c0a7077a34986502eae931: kube-system/storage-provisioner/storage-provisioner" id=ee4f3157-4a7a-472c-a5be-63d520de2bea name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:51 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:51.180648143Z" level=info msg="Starting container: e01b7aef39ee6058f6e5264b1e701ec436e17ea543c0a7077a34986502eae931" id=e357dc0d-7588-4bbc-a5ce-14b19c025696 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:54:51 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:51.182459742Z" level=info msg="Started container" PID=1819 containerID=e01b7aef39ee6058f6e5264b1e701ec436e17ea543c0a7077a34986502eae931 description=kube-system/storage-provisioner/storage-provisioner id=e357dc0d-7588-4bbc-a5ce-14b19c025696 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4a27bc3c57fee45d5f2b0f4d6bd667c5ff2c3d58587d409dfb97d0a3210f5082
	Jan 10 08:55:03 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:55:03.027826457Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7708ad59-2510-4568-aa76-f9c9c242686a name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:55:03 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:55:03.028793247Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=75ed01f4-7fc1-4a72-b250-1e2313fba60d name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:55:03 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:55:03.029933144Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml/dashboard-metrics-scraper" id=10d84106-4d01-4bd8-81fe-d536059b6ff9 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:55:03 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:55:03.030084845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:55:03 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:55:03.035493917Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:55:03 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:55:03.035966615Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:55:03 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:55:03.064259529Z" level=info msg="Created container f4ba245850b91f72206873d0692ed94f1e4c692957ae0aab222f8c6cebe6e4e6: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml/dashboard-metrics-scraper" id=10d84106-4d01-4bd8-81fe-d536059b6ff9 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:55:03 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:55:03.064949963Z" level=info msg="Starting container: f4ba245850b91f72206873d0692ed94f1e4c692957ae0aab222f8c6cebe6e4e6" id=cd796f07-0d2b-4d37-8c2a-37a2e29feddb name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:55:03 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:55:03.066837162Z" level=info msg="Started container" PID=1860 containerID=f4ba245850b91f72206873d0692ed94f1e4c692957ae0aab222f8c6cebe6e4e6 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml/dashboard-metrics-scraper id=cd796f07-0d2b-4d37-8c2a-37a2e29feddb name=/runtime.v1.RuntimeService/StartContainer sandboxID=6db5ee88501b7f60b1ce1831614c2769b806fab40c20ccf42bc03d438425b3ca
	Jan 10 08:55:03 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:55:03.179922558Z" level=info msg="Removing container: 69086618990f9c502da1b3075049a6a5604434cff22757ba9df7794639a6d093" id=15a32fd2-adc2-4eb3-901d-b8f2ad5e9be5 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 08:55:03 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:55:03.188748556Z" level=info msg="Removed container 69086618990f9c502da1b3075049a6a5604434cff22757ba9df7794639a6d093: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml/dashboard-metrics-scraper" id=15a32fd2-adc2-4eb3-901d-b8f2ad5e9be5 name=/runtime.v1.RuntimeService/RemoveContainer
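The Created/Started/Removed churn above for dashboard-metrics-scraper is the kubelet restarting a failing container and garbage-collecting the previous attempt; the container-status table below shows it Exited on attempt 3. A small Go sketch for pulling the logs of every attempt through the same crictl CLI the harness invokes; the sudo prefix and the `--name` regexp filter are assumptions about this environment:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// IDs of all (running and exited) scraper containers; `ps -a --quiet`
    	// matches the invocation used elsewhere in this log.
    	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
    		"--name", "dashboard-metrics-scraper").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, id := range strings.Fields(string(out)) {
    		logs, err := exec.Command("sudo", "crictl", "logs", id).CombinedOutput()
    		if err != nil {
    			log.Printf("crictl logs %s: %v", id, err)
    			continue
    		}
    		fmt.Printf("--- %s ---\n%s\n", id, logs)
    	}
    }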
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	f4ba245850b91       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   6db5ee88501b7       dashboard-metrics-scraper-867fb5f87b-d9dml             kubernetes-dashboard
	e01b7aef39ee6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   4a27bc3c57fee       storage-provisioner                                    kube-system
	803bc92acffae       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   62b3477e2a9cc       kubernetes-dashboard-b84665fb8-4pp7j                   kubernetes-dashboard
	235ed1d8fdbe3       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           52 seconds ago      Running             coredns                     0                   e56091141abb0       coredns-7d764666f9-cjklg                               kube-system
	8d2f47b9b0900       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   00a18ad303819       busybox                                                default
	72e24a04a184b       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           52 seconds ago      Running             kube-proxy                  0                   0b0c5a85f220f       kube-proxy-fbfrd                                       kube-system
	3c114dad8ad59       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           52 seconds ago      Running             kindnet-cni                 0                   5960d3ff239b1       kindnet-sd4nd                                          kube-system
	d433193a33ce7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   4a27bc3c57fee       storage-provisioner                                    kube-system
	85fbcb73a888a       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           55 seconds ago      Running             kube-controller-manager     0                   d01aa86260796       kube-controller-manager-default-k8s-diff-port-225354   kube-system
	6de83a52f42b4       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           55 seconds ago      Running             kube-scheduler              0                   cc138c284c7b7       kube-scheduler-default-k8s-diff-port-225354            kube-system
	767f06c98be9d       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           55 seconds ago      Running             etcd                        0                   971f1aa35e898       etcd-default-k8s-diff-port-225354                      kube-system
	5055dfe1945b7       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           55 seconds ago      Running             kube-apiserver              0                   d5f08b274aee6       kube-apiserver-default-k8s-diff-port-225354            kube-system
	
	
	==> coredns [235ed1d8fdbe3c06d2c84ba29264bcc6d08d11a54a4c280982b06d15ec0b9d32] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:41089 - 60580 "HINFO IN 5419416935415468060.439216048556848234. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.020744613s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
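The repeated `Plugins not ready: "kubernetes"` lines are CoreDNS's ready plugin holding readiness down until the kubernetes plugin has synced with the API; the `Failed to watch` errors date from the apiserver restart. A probe sketch, assuming the ready plugin's default HTTP endpoint on :8181/ready:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	// CoreDNS's ready plugin returns 200 only once every enabled
    	// plugin (here, kubernetes) has reported ready.
    	client := &http.Client{Timeout: 2 * time.Second}
    	for {
    		resp, err := client.Get("http://127.0.0.1:8181/ready")
    		if err == nil {
    			code := resp.StatusCode
    			resp.Body.Close()
    			if code == http.StatusOK {
    				fmt.Println("coredns ready")
    				return
    			}
    		}
    		time.Sleep(time.Second)
    	}
    }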
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-225354
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-225354
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=default-k8s-diff-port-225354
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T08_53_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 08:53:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-225354
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 08:55:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 08:54:50 +0000   Sat, 10 Jan 2026 08:53:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 08:54:50 +0000   Sat, 10 Jan 2026 08:53:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 08:54:50 +0000   Sat, 10 Jan 2026 08:53:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 08:54:50 +0000   Sat, 10 Jan 2026 08:53:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-225354
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                5a40150e-8f76-4d08-b9ae-bb32149e49ad
	  Boot ID:                    7c12f492-59b6-440b-986c-741ece916a23
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-7d764666f9-cjklg                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-default-k8s-diff-port-225354                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-sd4nd                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-default-k8s-diff-port-225354             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-225354    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-fbfrd                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-default-k8s-diff-port-225354             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-d9dml              0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-4pp7j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
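As a cross-check, the totals agree with the per-pod rows above: CPU requests 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m of the node's 8 CPUs (8000m) is about 10.6%, shown rounded as 10%; the lone CPU limit is kindnet's 100m. Memory requests (70Mi + 100Mi + 50Mi = 220Mi) and limits (170Mi + 50Mi = 220Mi) likewise match.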
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  109s  node-controller  Node default-k8s-diff-port-225354 event: Registered Node default-k8s-diff-port-225354 in Controller
	  Normal  RegisteredNode  50s   node-controller  Node default-k8s-diff-port-225354 event: Registered Node default-k8s-diff-port-225354 in Controller
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 4a 03 46 88 66 4e 08 06
	[  +6.406566] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.001429] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[ +24.002020] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7a 6f 49 7e 9d d1 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 3a ac 18 98 c9 08 06
	[Jan10 08:52] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4a 05 5c 86 cf df 08 06
	[  +0.000337] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.000602] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[  +0.739059] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	[  +1.988700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[ +20.040962] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 ac f8 cf fb 84 08 06
	[  +0.000375] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[  +0.000915] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	
	
	==> etcd [767f06c98be9d86d55d0cbaaa375406db22fd312258e490654cdcba950d47c27] <==
	{"level":"info","ts":"2026-01-10T08:54:17.594358Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T08:54:17.594164Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2026-01-10T08:54:17.594558Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T08:54:17.594826Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T08:54:18.579437Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T08:54:18.579493Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T08:54:18.579572Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T08:54:18.579588Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:54:18.579617Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T08:54:18.580506Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-10T08:54:18.580550Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:54:18.580577Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2026-01-10T08:54:18.580590Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-10T08:54:18.581344Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-diff-port-225354 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T08:54:18.581397Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:54:18.581428Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:54:18.581610Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T08:54:18.581635Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T08:54:18.582985Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:54:18.583052Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:54:18.585747Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T08:54:18.585817Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2026-01-10T08:54:33.130781Z","caller":"traceutil/trace.go:172","msg":"trace[966154735] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"124.692788ms","start":"2026-01-10T08:54:33.006023Z","end":"2026-01-10T08:54:33.130716Z","steps":["trace[966154735] 'process raft request'  (duration: 124.436596ms)"],"step_count":1}
	{"level":"warn","ts":"2026-01-10T08:54:33.361601Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.542989ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722598312611999281 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-225354\" mod_revision:602 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-225354\" value_size:7956 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-225354\" > >>","response":"size:16"}
	{"level":"info","ts":"2026-01-10T08:54:33.361760Z","caller":"traceutil/trace.go:172","msg":"trace[1779755040] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"202.257974ms","start":"2026-01-10T08:54:33.159467Z","end":"2026-01-10T08:54:33.361725Z","steps":["trace[1779755040] 'process raft request'  (duration: 72.063928ms)","trace[1779755040] 'compare'  (duration: 129.420721ms)"],"step_count":2}
	
	
	==> kernel <==
	 08:55:13 up 37 min,  0 user,  load average: 5.12, 4.32, 2.81
	Linux default-k8s-diff-port-225354 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3c114dad8ad59dd4b14f99ea5527623796f92164415824fe236b0c02d4257c0b] <==
	I0110 08:54:20.511487       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 08:54:20.604095       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0110 08:54:20.605296       1 main.go:148] setting mtu 1500 for CNI 
	I0110 08:54:20.605355       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 08:54:20.605391       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T08:54:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 08:54:20.808349       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 08:54:20.808387       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 08:54:20.808399       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 08:54:20.808537       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 08:54:21.209240       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 08:54:21.209267       1 metrics.go:72] Registering metrics
	I0110 08:54:21.209317       1 controller.go:711] "Syncing nftables rules"
	I0110 08:54:30.809876       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 08:54:30.809916       1 main.go:301] handling current node
	I0110 08:54:40.814835       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 08:54:40.814891       1 main.go:301] handling current node
	I0110 08:54:50.809031       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 08:54:50.809066       1 main.go:301] handling current node
	I0110 08:55:00.811842       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 08:55:00.811894       1 main.go:301] handling current node
	I0110 08:55:10.815125       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 08:55:10.815159       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5055dfe1945b7e474350afd64ade8604c08027a381ce57320b00e445ef977a5c] <==
	I0110 08:54:19.615211       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 08:54:19.615310       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:19.616295       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0110 08:54:19.616524       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 08:54:19.617180       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:19.617265       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 08:54:19.617300       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 08:54:19.617328       1 shared_informer.go:377] "Caches are synced"
	E0110 08:54:19.618017       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 08:54:19.623030       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 08:54:19.623381       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:54:19.627529       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 08:54:19.632532       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 08:54:19.650911       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0110 08:54:19.879023       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 08:54:19.907650       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 08:54:19.926649       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 08:54:19.933519       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 08:54:19.944273       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 08:54:19.977464       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.236.20"}
	I0110 08:54:19.987618       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.132.10"}
	I0110 08:54:20.519035       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 08:54:23.213293       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 08:54:23.261361       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 08:54:23.362245       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [85fbcb73a888a911d321e3d1ed0152e1aa93447d76ca22015d3a09638892f2af] <==
	I0110 08:54:22.766226       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.766671       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.766726       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.766807       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.766810       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.766768       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.766955       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.767056       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 08:54:22.767130       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-225354"
	I0110 08:54:22.767179       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0110 08:54:22.767356       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.767397       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.767518       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.767946       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.768011       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.768049       1 range_allocator.go:177] "Sending events to api server"
	I0110 08:54:22.768092       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0110 08:54:22.768107       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:54:22.768117       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.769273       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.772182       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:54:22.867364       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.867388       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 08:54:22.867395       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 08:54:22.872498       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [72e24a04a184b275fa6ca7d48238546975c5ce403c3d895a3acdd96c296c0a84] <==
	I0110 08:54:20.438869       1 server_linux.go:53] "Using iptables proxy"
	I0110 08:54:20.495486       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:54:20.595688       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:20.595757       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0110 08:54:20.596080       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 08:54:20.619483       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 08:54:20.619555       1 server_linux.go:136] "Using iptables Proxier"
	I0110 08:54:20.627141       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 08:54:20.627640       1 server.go:529] "Version info" version="v1.35.0"
	I0110 08:54:20.627673       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:54:20.630284       1 config.go:106] "Starting endpoint slice config controller"
	I0110 08:54:20.630354       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 08:54:20.630486       1 config.go:200] "Starting service config controller"
	I0110 08:54:20.630521       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 08:54:20.630522       1 config.go:309] "Starting node config controller"
	I0110 08:54:20.630843       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 08:54:20.630876       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 08:54:20.630492       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 08:54:20.630929       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 08:54:20.730561       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 08:54:20.730832       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 08:54:20.731136       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [6de83a52f42b4d00ef4463aa0a10635035e611d92fcb5f692497cd23e40d7676] <==
	I0110 08:54:17.855670       1 serving.go:386] Generated self-signed cert in-memory
	W0110 08:54:19.523613       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 08:54:19.523657       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 08:54:19.523669       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 08:54:19.523679       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 08:54:19.553413       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 08:54:19.553536       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:54:19.557009       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 08:54:19.557116       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:54:19.557827       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 08:54:19.557906       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 08:54:19.658075       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 08:54:33 default-k8s-diff-port-225354 kubelet[738]: E0110 08:54:33.095670     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-d9dml_kubernetes-dashboard(b7fa8b91-8b6b-46e3-9146-1e0d5fa8ce4a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml" podUID="b7fa8b91-8b6b-46e3-9146-1e0d5fa8ce4a"
	Jan 10 08:54:40 default-k8s-diff-port-225354 kubelet[738]: E0110 08:54:40.772804     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:40 default-k8s-diff-port-225354 kubelet[738]: I0110 08:54:40.772861     738 scope.go:122] "RemoveContainer" containerID="abdc865b79b316a8cab4eb0835c4e81a86adcfc0e06c79f1551a7e854cbe6e00"
	Jan 10 08:54:40 default-k8s-diff-port-225354 kubelet[738]: E0110 08:54:40.773132     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-d9dml_kubernetes-dashboard(b7fa8b91-8b6b-46e3-9146-1e0d5fa8ce4a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml" podUID="b7fa8b91-8b6b-46e3-9146-1e0d5fa8ce4a"
	Jan 10 08:54:42 default-k8s-diff-port-225354 kubelet[738]: E0110 08:54:42.026854     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:42 default-k8s-diff-port-225354 kubelet[738]: I0110 08:54:42.026927     738 scope.go:122] "RemoveContainer" containerID="abdc865b79b316a8cab4eb0835c4e81a86adcfc0e06c79f1551a7e854cbe6e00"
	Jan 10 08:54:42 default-k8s-diff-port-225354 kubelet[738]: I0110 08:54:42.119643     738 scope.go:122] "RemoveContainer" containerID="abdc865b79b316a8cab4eb0835c4e81a86adcfc0e06c79f1551a7e854cbe6e00"
	Jan 10 08:54:42 default-k8s-diff-port-225354 kubelet[738]: E0110 08:54:42.119883     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:42 default-k8s-diff-port-225354 kubelet[738]: I0110 08:54:42.119916     738 scope.go:122] "RemoveContainer" containerID="69086618990f9c502da1b3075049a6a5604434cff22757ba9df7794639a6d093"
	Jan 10 08:54:42 default-k8s-diff-port-225354 kubelet[738]: E0110 08:54:42.120128     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-d9dml_kubernetes-dashboard(b7fa8b91-8b6b-46e3-9146-1e0d5fa8ce4a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml" podUID="b7fa8b91-8b6b-46e3-9146-1e0d5fa8ce4a"
	Jan 10 08:54:50 default-k8s-diff-port-225354 kubelet[738]: E0110 08:54:50.771786     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:50 default-k8s-diff-port-225354 kubelet[738]: I0110 08:54:50.771833     738 scope.go:122] "RemoveContainer" containerID="69086618990f9c502da1b3075049a6a5604434cff22757ba9df7794639a6d093"
	Jan 10 08:54:50 default-k8s-diff-port-225354 kubelet[738]: E0110 08:54:50.772077     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-d9dml_kubernetes-dashboard(b7fa8b91-8b6b-46e3-9146-1e0d5fa8ce4a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml" podUID="b7fa8b91-8b6b-46e3-9146-1e0d5fa8ce4a"
	Jan 10 08:54:51 default-k8s-diff-port-225354 kubelet[738]: I0110 08:54:51.145807     738 scope.go:122] "RemoveContainer" containerID="d433193a33ce7cc58ddea93f07610ab5f4bf6c281e65a05ab523fab1fa9029b0"
	Jan 10 08:54:55 default-k8s-diff-port-225354 kubelet[738]: E0110 08:54:55.999892     738 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-cjklg" containerName="coredns"
	Jan 10 08:55:03 default-k8s-diff-port-225354 kubelet[738]: E0110 08:55:03.027253     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml" containerName="dashboard-metrics-scraper"
	Jan 10 08:55:03 default-k8s-diff-port-225354 kubelet[738]: I0110 08:55:03.027299     738 scope.go:122] "RemoveContainer" containerID="69086618990f9c502da1b3075049a6a5604434cff22757ba9df7794639a6d093"
	Jan 10 08:55:03 default-k8s-diff-port-225354 kubelet[738]: I0110 08:55:03.178434     738 scope.go:122] "RemoveContainer" containerID="69086618990f9c502da1b3075049a6a5604434cff22757ba9df7794639a6d093"
	Jan 10 08:55:03 default-k8s-diff-port-225354 kubelet[738]: E0110 08:55:03.178667     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml" containerName="dashboard-metrics-scraper"
	Jan 10 08:55:03 default-k8s-diff-port-225354 kubelet[738]: I0110 08:55:03.178707     738 scope.go:122] "RemoveContainer" containerID="f4ba245850b91f72206873d0692ed94f1e4c692957ae0aab222f8c6cebe6e4e6"
	Jan 10 08:55:03 default-k8s-diff-port-225354 kubelet[738]: E0110 08:55:03.178965     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-d9dml_kubernetes-dashboard(b7fa8b91-8b6b-46e3-9146-1e0d5fa8ce4a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml" podUID="b7fa8b91-8b6b-46e3-9146-1e0d5fa8ce4a"
	Jan 10 08:55:09 default-k8s-diff-port-225354 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 08:55:09 default-k8s-diff-port-225354 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 08:55:09 default-k8s-diff-port-225354 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 08:55:09 default-k8s-diff-port-225354 systemd[1]: kubelet.service: Consumed 1.831s CPU time.
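	
	Note: the kubelet entries above show the CrashLoopBackOff delay for dashboard-metrics-scraper doubling across restarts (back-off 10s, then 20s, then 40s). The kubelet doubles the restart back-off after each failed start and caps it at five minutes, so the next delays would be 1m20s, 2m40s, and then 5m. The Go sketch below models that schedule; it is an illustration of the behaviour seen in the log, not kubelet or minikube code.
	
	package main
	
	import (
		"fmt"
		"time"
	)
	
	// Prints the restart schedule observed above and its continuation:
	// 10s, 20s, 40s, 1m20s, 2m40s, then capped at 5m.
	func main() {
		backoff, maxDelay := 10*time.Second, 5*time.Minute
		for restart := 1; restart <= 8; restart++ {
			fmt.Printf("restart %d: back-off %v\n", restart, backoff)
			backoff *= 2
			if backoff > maxDelay {
				backoff = maxDelay
			}
		}
	}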
	
	
	==> kubernetes-dashboard [803bc92acffae10929c55ac97f6e93e1c6fbc136ab07254668d7394f7b1734bc] <==
	2026/01/10 08:54:27 Starting overwatch
	2026/01/10 08:54:27 Using namespace: kubernetes-dashboard
	2026/01/10 08:54:27 Using in-cluster config to connect to apiserver
	2026/01/10 08:54:27 Using secret token for csrf signing
	2026/01/10 08:54:27 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 08:54:27 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 08:54:27 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 08:54:27 Generating JWE encryption key
	2026/01/10 08:54:27 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 08:54:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 08:54:27 Initializing JWE encryption key from synchronized object
	2026/01/10 08:54:27 Creating in-cluster Sidecar client
	2026/01/10 08:54:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 08:54:27 Serving insecurely on HTTP port: 9090
	2026/01/10 08:54:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [d433193a33ce7cc58ddea93f07610ab5f4bf6c281e65a05ab523fab1fa9029b0] <==
	I0110 08:54:20.402672       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 08:54:50.408128       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e01b7aef39ee6058f6e5264b1e701ec436e17ea543c0a7077a34986502eae931] <==
	I0110 08:54:51.195051       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 08:54:51.202807       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 08:54:51.202858       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 08:54:51.204860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:54.659575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:58.919689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:55:02.517795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:55:05.572187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:55:08.595363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:55:08.600798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 08:55:08.600971       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 08:55:08.601103       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ec9aed77-6d7b-4b77-832d-6c05972cbbb9", APIVersion:"v1", ResourceVersion:"641", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-225354_057747ac-a12c-4556-b3dc-e2e3e942d42f became leader
	I0110 08:55:08.601228       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-225354_057747ac-a12c-4556-b3dc-e2e3e942d42f!
	W0110 08:55:08.603090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:55:08.606642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 08:55:08.701549       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-225354_057747ac-a12c-4556-b3dc-e2e3e942d42f!
	W0110 08:55:10.610211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:55:10.614308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:55:12.617756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:55:12.622354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
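	
	Note: the two storage-provisioner logs above show a leader-election handoff. The replacement pod polls the kube-system/k8s.io-minikube-hostpath lock (stored in a v1 Endpoints object, hence the repeated deprecation warnings), and the roughly 17-second gap between it starting (08:54:51) and acquiring the lease (08:55:08) is consistent with waiting for the crashed holder's lease to expire; only then does it start the provisioner controller. Below is a minimal client-go sketch of the same pattern, using the newer Lease-based lock instead of Endpoints; the identity and timings are illustrative, not the provisioner's actual values.
	
	package main
	
	import (
		"context"
		"log"
		"os"
		"time"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)
	
	func main() {
		cfg, err := rest.InClusterConfig() // pod credentials, as the provisioner uses
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		id, _ := os.Hostname() // identity recorded in the lock
	
		// Lease-based lock; the minikube provisioner still locks on an
		// Endpoints object, which is why the deprecation warnings appear above.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
	
		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second, // illustrative timings
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					log.Println("acquired lease; starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost lease; stopping")
				},
			},
		})
	}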
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-225354 -n default-k8s-diff-port-225354
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-225354 -n default-k8s-diff-port-225354: exit status 2 (363.152389ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
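Note: the --format flag is a Go text/template rendered against minikube's status struct, so {{.APIServer}} prints a single field while the exit code still reflects overall health (hence "Running" on stdout alongside exit status 2). A self-contained sketch of that mechanism follows; the struct and its field names are assumptions for illustration and may not match minikube's exactly.

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// Status stands in for the struct a --format template is rendered
	// against; the field names here are assumed for the sketch.
	type Status struct {
		Host, Kubelet, APIServer string
	}
	
	func main() {
		// Rough equivalent of: status --format={{.APIServer}}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}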
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-225354 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-225354
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-225354:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2d2060ee1efc6eaa651aa88ab17a771cbf4d2c84c916404a808d16a4e0ebc475",
	        "Created": "2026-01-10T08:53:05.098840342Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 323966,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T08:54:11.001513648Z",
	            "FinishedAt": "2026-01-10T08:54:09.679197355Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/2d2060ee1efc6eaa651aa88ab17a771cbf4d2c84c916404a808d16a4e0ebc475/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2d2060ee1efc6eaa651aa88ab17a771cbf4d2c84c916404a808d16a4e0ebc475/hostname",
	        "HostsPath": "/var/lib/docker/containers/2d2060ee1efc6eaa651aa88ab17a771cbf4d2c84c916404a808d16a4e0ebc475/hosts",
	        "LogPath": "/var/lib/docker/containers/2d2060ee1efc6eaa651aa88ab17a771cbf4d2c84c916404a808d16a4e0ebc475/2d2060ee1efc6eaa651aa88ab17a771cbf4d2c84c916404a808d16a4e0ebc475-json.log",
	        "Name": "/default-k8s-diff-port-225354",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-225354:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-225354",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2d2060ee1efc6eaa651aa88ab17a771cbf4d2c84c916404a808d16a4e0ebc475",
	                "LowerDir": "/var/lib/docker/overlay2/53567cf61aa1d670c6024da458ad8a084847f4bc5189cadc4f3bee860aaec98d-init/diff:/var/lib/docker/overlay2/97b14eb520192356c991915ce74cd6094e6ef948f398ac688b667ea18491ccde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/53567cf61aa1d670c6024da458ad8a084847f4bc5189cadc4f3bee860aaec98d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/53567cf61aa1d670c6024da458ad8a084847f4bc5189cadc4f3bee860aaec98d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/53567cf61aa1d670c6024da458ad8a084847f4bc5189cadc4f3bee860aaec98d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-225354",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-225354/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-225354",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-225354",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-225354",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9299501efb563d962baa4d8e5f36d3c3bd071340da360f9ce9020cabda86b341",
	            "SandboxKey": "/var/run/docker/netns/9299501efb56",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-225354": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "be766c670cb0e4620e923d047ad46e6a4f2da6ed81b0b1be71e9292154f73b90",
	                    "EndpointID": "99e404ed8d686c79624e36810469bb980a4c455d789d105b65c49c7612319933",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "7e:e6:f7:7b:b2:45",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-225354",
	                        "2d2060ee1efc"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
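Note: the NetworkSettings.Ports map in the inspect output above is where the published host ports live; for example, container port 22/tcp is bound to 127.0.0.1:33123, the SSH endpoint the harness dials. A minimal Go sketch that recovers such a binding, assuming `docker inspect <container>` JSON is piped on stdin; this is an illustration, not minikube's own helper.

	package main
	
	import (
		"encoding/json"
		"fmt"
		"log"
		"os"
	)
	
	// portBinding matches one entry of NetworkSettings.Ports in the
	// docker inspect JSON shown above.
	type portBinding struct {
		HostIP   string `json:"HostIp"`
		HostPort string `json:"HostPort"`
	}
	
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]portBinding `json:"Ports"`
		} `json:"NetworkSettings"`
	}
	
	func main() {
		var entries []inspectEntry // docker inspect emits a JSON array
		if err := json.NewDecoder(os.Stdin).Decode(&entries); err != nil {
			log.Fatalf("decode inspect JSON: %v", err)
		}
		if len(entries) == 0 {
			log.Fatal("no containers in inspect output")
		}
		bindings := entries[0].NetworkSettings.Ports["22/tcp"]
		if len(bindings) == 0 {
			log.Fatal("22/tcp is not published")
		}
		// For the container above this prints 127.0.0.1:33123.
		fmt.Printf("%s:%s\n", bindings[0].HostIP, bindings[0].HostPort)
	}

Usage, e.g.: docker inspect default-k8s-diff-port-225354 | go run sshport.go (the file name is hypothetical).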
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-225354 -n default-k8s-diff-port-225354
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-225354 -n default-k8s-diff-port-225354: exit status 2 (343.504404ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-225354 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-225354 logs -n 25: (1.144464411s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-093083                                                                                                                                                                                                                     │ old-k8s-version-093083            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ image   │ no-preload-095312 image list --format=json                                                                                                                                                                                                    │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p no-preload-095312 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p old-k8s-version-093083                                                                                                                                                                                                                     │ old-k8s-version-093083            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p newest-cni-582650 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ delete  │ -p no-preload-095312                                                                                                                                                                                                                          │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ delete  │ -p no-preload-095312                                                                                                                                                                                                                          │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p test-preload-dl-gcs-424382 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-424382        │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-424382                                                                                                                                                                                                                 │ test-preload-dl-gcs-424382        │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p test-preload-dl-github-434342 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-434342     │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ image   │ embed-certs-072273 image list --format=json                                                                                                                                                                                                   │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p embed-certs-072273 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p test-preload-dl-github-434342                                                                                                                                                                                                              │ test-preload-dl-github-434342     │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-077581 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-077581 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-077581                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-077581 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ delete  │ -p embed-certs-072273                                                                                                                                                                                                                         │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ delete  │ -p embed-certs-072273                                                                                                                                                                                                                         │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable metrics-server -p newest-cni-582650 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ stop    │ -p newest-cni-582650 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable dashboard -p newest-cni-582650 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p newest-cni-582650 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:55 UTC │ 10 Jan 26 08:55 UTC │
	│ image   │ default-k8s-diff-port-225354 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-225354      │ jenkins │ v1.37.0 │ 10 Jan 26 08:55 UTC │ 10 Jan 26 08:55 UTC │
	│ pause   │ -p default-k8s-diff-port-225354 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-225354      │ jenkins │ v1.37.0 │ 10 Jan 26 08:55 UTC │                     │
	│ image   │ newest-cni-582650 image list --format=json                                                                                                                                                                                                    │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:55 UTC │ 10 Jan 26 08:55 UTC │
	│ pause   │ -p newest-cni-582650 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:55:00
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:55:00.043379  337651 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:55:00.043525  337651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:55:00.043538  337651 out.go:374] Setting ErrFile to fd 2...
	I0110 08:55:00.043544  337651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:55:00.043842  337651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:55:00.044396  337651 out.go:368] Setting JSON to false
	I0110 08:55:00.045548  337651 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2252,"bootTime":1768033048,"procs":261,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:55:00.045600  337651 start.go:143] virtualization: kvm guest
	I0110 08:55:00.047536  337651 out.go:179] * [newest-cni-582650] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 08:55:00.049141  337651 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:55:00.049136  337651 notify.go:221] Checking for updates...
	I0110 08:55:00.051578  337651 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:55:00.052772  337651 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:55:00.054143  337651 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:55:00.055504  337651 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 08:55:00.056874  337651 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:55:00.058529  337651 config.go:182] Loaded profile config "newest-cni-582650": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:55:00.059052  337651 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:55:00.083180  337651 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:55:00.083261  337651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:55:00.139318  337651 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2026-01-10 08:55:00.129647485 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:55:00.139469  337651 docker.go:319] overlay module found
	I0110 08:55:00.141276  337651 out.go:179] * Using the docker driver based on existing profile
	I0110 08:55:00.142458  337651 start.go:309] selected driver: docker
	I0110 08:55:00.142480  337651 start.go:928] validating driver "docker" against &{Name:newest-cni-582650 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-582650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:55:00.142582  337651 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:55:00.143267  337651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:55:00.197877  337651 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2026-01-10 08:55:00.188806511 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:55:00.198241  337651 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 08:55:00.198276  337651 cni.go:84] Creating CNI manager for ""
	I0110 08:55:00.198348  337651 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:55:00.198405  337651 start.go:353] cluster config:
	{Name:newest-cni-582650 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-582650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:55:00.200031  337651 out.go:179] * Starting "newest-cni-582650" primary control-plane node in "newest-cni-582650" cluster
	I0110 08:55:00.201239  337651 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 08:55:00.202384  337651 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:55:00.203414  337651 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:55:00.203449  337651 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 08:55:00.203455  337651 cache.go:65] Caching tarball of preloaded images
	I0110 08:55:00.203502  337651 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:55:00.203549  337651 preload.go:251] Found /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 08:55:00.203565  337651 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 08:55:00.203687  337651 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/config.json ...
	I0110 08:55:00.223996  337651 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 08:55:00.224013  337651 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 08:55:00.224029  337651 cache.go:243] Successfully downloaded all kic artifacts
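
Both cache probes above are plain existence checks: a stat of the preload tarball under .minikube/cache, and an image lookup against the local Docker daemon to decide whether the kicbase pull can be skipped. A minimal sketch of the two checks in Go (paths and the image ref are copied from the log, with the digest trimmed for brevity; the function names are illustrative, not minikube's cache package):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// cachedPreload reports whether the preload tarball is already on disk.
func cachedPreload(path string) bool {
	_, err := os.Stat(path)
	return err == nil
}

// imageInDaemon reports whether the local docker daemon already holds the ref;
// `docker image inspect` exits non-zero when the ref is unknown.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	tarball := "/home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4"
	fmt.Println("preload cached:", cachedPreload(tarball))
	fmt.Println("kicbase in daemon:", imageInDaemon("gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401"))
}
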
	I0110 08:55:00.224057  337651 start.go:360] acquireMachinesLock for newest-cni-582650: {Name:mk8a366cb6a19cf5fbfd56cf9cfee17123f828e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:55:00.224121  337651 start.go:364] duration metric: took 36.014µs to acquireMachinesLock for "newest-cni-582650"
	I0110 08:55:00.224137  337651 start.go:96] Skipping create...Using existing machine configuration
	I0110 08:55:00.224141  337651 fix.go:54] fixHost starting: 
	I0110 08:55:00.224354  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:00.241369  337651 fix.go:112] recreateIfNeeded on newest-cni-582650: state=Stopped err=<nil>
	W0110 08:55:00.241406  337651 fix.go:138] unexpected machine state, will restart: <nil>
	I0110 08:55:00.243298  337651 out.go:252] * Restarting existing docker container for "newest-cni-582650" ...
	I0110 08:55:00.243356  337651 cli_runner.go:164] Run: docker start newest-cni-582650
	I0110 08:55:00.486349  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:00.505501  337651 kic.go:430] container "newest-cni-582650" state is running.
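
The restart path above never recreates the machine: fix.go inspects the container's .State.Status, sees Stopped, issues docker start, and re-inspects until the state reads running. A minimal sketch of that inspect-then-start pattern (assuming the Docker CLI is on PATH; ensureRunning and the direct os/exec calls are illustrative, not minikube's helpers):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState returns the container's .State.Status ("running", "exited", ...).
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", name, err)
	}
	return strings.TrimSpace(string(out)), nil
}

// ensureRunning restarts the existing container instead of recreating it.
func ensureRunning(name string) error {
	state, err := containerState(name)
	if err != nil {
		return err
	}
	if state == "running" {
		return nil // nothing to do
	}
	return exec.Command("docker", "start", name).Run()
}

func main() {
	if err := ensureRunning("newest-cni-582650"); err != nil {
		fmt.Println(err)
	}
}
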
	I0110 08:55:00.505877  337651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-582650
	I0110 08:55:00.524765  337651 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/config.json ...
	I0110 08:55:00.525042  337651 machine.go:94] provisionDockerMachine start ...
	I0110 08:55:00.525107  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:00.544567  337651 main.go:144] libmachine: Using SSH client type: native
	I0110 08:55:00.544832  337651 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0110 08:55:00.544847  337651 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 08:55:00.545519  337651 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40212->127.0.0.1:33133: read: connection reset by peer
	I0110 08:55:03.674623  337651 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-582650
	
	I0110 08:55:03.674651  337651 ubuntu.go:182] provisioning hostname "newest-cni-582650"
	I0110 08:55:03.674704  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:03.692657  337651 main.go:144] libmachine: Using SSH client type: native
	I0110 08:55:03.692890  337651 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0110 08:55:03.692907  337651 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-582650 && echo "newest-cni-582650" | sudo tee /etc/hostname
	I0110 08:55:03.828409  337651 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-582650
	
	I0110 08:55:03.828473  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:03.846317  337651 main.go:144] libmachine: Using SSH client type: native
	I0110 08:55:03.846526  337651 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0110 08:55:03.846543  337651 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-582650' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-582650/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-582650' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 08:55:03.973261  337651 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 08:55:03.973293  337651 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-3641/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-3641/.minikube}
	I0110 08:55:03.973332  337651 ubuntu.go:190] setting up certificates
	I0110 08:55:03.973353  337651 provision.go:84] configureAuth start
	I0110 08:55:03.973412  337651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-582650
	I0110 08:55:03.991962  337651 provision.go:143] copyHostCerts
	I0110 08:55:03.992035  337651 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem, removing ...
	I0110 08:55:03.992063  337651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem
	I0110 08:55:03.992169  337651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem (1078 bytes)
	I0110 08:55:03.992344  337651 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem, removing ...
	I0110 08:55:03.992367  337651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem
	I0110 08:55:03.992428  337651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem (1123 bytes)
	I0110 08:55:03.992533  337651 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem, removing ...
	I0110 08:55:03.992544  337651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem
	I0110 08:55:03.992585  337651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem (1675 bytes)
	I0110 08:55:03.992659  337651 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem org=jenkins.newest-cni-582650 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-582650]
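
configureAuth regenerates the docker-machine server certificate so that its SANs cover every name the host answers to, here 127.0.0.1, 192.168.94.2, localhost, minikube and newest-cni-582650. A minimal crypto/x509 sketch of signing such a server certificate with an existing CA (the caCert/caKey parameters stand in for the parsed ca.pem and ca-key.pem; PEM encoding and error paths are trimmed, and this is not minikube's own provisioner):

package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// newServerCert signs a server certificate for the given SANs with an existing CA
// and returns the DER bytes plus the fresh private key.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
	dnsNames []string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-582650"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     dnsNames, // localhost, minikube, newest-cni-582650
		IPAddresses:  ips,      // 127.0.0.1, 192.168.94.2
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil // PEM-encode before writing server.pem / server-key.pem
}
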
	I0110 08:55:04.081124  337651 provision.go:177] copyRemoteCerts
	I0110 08:55:04.081206  337651 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 08:55:04.081249  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.100529  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:04.194315  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 08:55:04.211927  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 08:55:04.229325  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0110 08:55:04.246098  337651 provision.go:87] duration metric: took 272.723804ms to configureAuth
	I0110 08:55:04.246123  337651 ubuntu.go:206] setting minikube options for container-runtime
	I0110 08:55:04.246301  337651 config.go:182] Loaded profile config "newest-cni-582650": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:55:04.246422  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.265307  337651 main.go:144] libmachine: Using SSH client type: native
	I0110 08:55:04.265532  337651 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0110 08:55:04.265554  337651 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 08:55:04.543910  337651 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 08:55:04.543936  337651 machine.go:97] duration metric: took 4.018877882s to provisionDockerMachine
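
provisionDockerMachine spans the window in which sshd inside the restarted container comes up, which is why the first dial at 08:55:00.545519 died with a connection reset and the same command succeeded three seconds later: the client simply redials. A retry sketch assuming golang.org/x/crypto/ssh, with the address and key path taken from the log (dialWithRetry, the attempt count and the backoff are illustrative):

package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps redialing until sshd in the freshly started container
// accepts the handshake, mirroring the reset-then-success seen in the log.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		lastErr = err
		time.Sleep(time.Duration(i+1) * 300 * time.Millisecond) // linear backoff
	}
	return nil, lastErr
}

func main() {
	keyPEM, err := os.ReadFile("/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyPEM)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
		Timeout:         5 * time.Second,
	}
	client, err := dialWithRetry("127.0.0.1:33133", cfg, 10)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
}
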
	I0110 08:55:04.543951  337651 start.go:293] postStartSetup for "newest-cni-582650" (driver="docker")
	I0110 08:55:04.543965  337651 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 08:55:04.544023  337651 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 08:55:04.544069  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.562427  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:04.656029  337651 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 08:55:04.659421  337651 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 08:55:04.659453  337651 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 08:55:04.659466  337651 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-3641/.minikube/addons for local assets ...
	I0110 08:55:04.659517  337651 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-3641/.minikube/files for local assets ...
	I0110 08:55:04.659609  337651 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem -> 71832.pem in /etc/ssl/certs
	I0110 08:55:04.659755  337651 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 08:55:04.668433  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem --> /etc/ssl/certs/71832.pem (1708 bytes)
	I0110 08:55:04.685867  337651 start.go:296] duration metric: took 141.902418ms for postStartSetup
	I0110 08:55:04.685949  337651 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:55:04.686014  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.704239  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:04.794956  337651 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 08:55:04.799376  337651 fix.go:56] duration metric: took 4.575228964s for fixHost
	I0110 08:55:04.799403  337651 start.go:83] releasing machines lock for "newest-cni-582650", held for 4.575271886s
	I0110 08:55:04.799453  337651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-582650
	I0110 08:55:04.817146  337651 ssh_runner.go:195] Run: cat /version.json
	I0110 08:55:04.817199  337651 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 08:55:04.817280  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.817203  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.836895  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:04.837570  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:04.980302  337651 ssh_runner.go:195] Run: systemctl --version
	I0110 08:55:04.986927  337651 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 08:55:05.021964  337651 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 08:55:05.026769  337651 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 08:55:05.026837  337651 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 08:55:05.035076  337651 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 08:55:05.035124  337651 start.go:496] detecting cgroup driver to use...
	I0110 08:55:05.035171  337651 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 08:55:05.035219  337651 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 08:55:05.049316  337651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 08:55:05.061222  337651 docker.go:218] disabling cri-docker service (if available) ...
	I0110 08:55:05.061266  337651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 08:55:05.076828  337651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 08:55:05.088925  337651 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 08:55:05.169201  337651 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 08:55:05.250358  337651 docker.go:234] disabling docker service ...
	I0110 08:55:05.250421  337651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 08:55:05.265340  337651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 08:55:05.277642  337651 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 08:55:05.354970  337651 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 08:55:05.438086  337651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 08:55:05.450523  337651 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 08:55:05.464552  337651 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 08:55:05.464606  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.473501  337651 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 08:55:05.473560  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.482110  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.490292  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.498788  337651 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 08:55:05.507142  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.515949  337651 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.524862  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.533635  337651 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 08:55:05.541045  337651 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 08:55:05.548719  337651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:55:05.628011  337651 ssh_runner.go:195] Run: sudo systemctl restart crio
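
Every CRI-O tweak above is a line-oriented rewrite of /etc/crio/crio.conf.d/02-crio.conf via sed (pin pause_image, force cgroup_manager to systemd, re-insert conmon_cgroup, seed default_sysctls), followed by daemon-reload and a crio restart. One such substitution expressed directly in Go, as a hedged equivalent of the sed call rather than minikube's actual code:

package main

import (
	"log"
	"os"
	"regexp"
)

// replaceLine rewrites every line matching pattern, like `sed -i 's|...|...|'`.
func replaceLine(path, pattern, repl string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile("(?m)" + pattern) // (?m) makes ^ and $ match per line
	return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0o644)
}

func main() {
	// Mirrors: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	if err := replaceLine("/etc/crio/crio.conf.d/02-crio.conf",
		`^.*pause_image = .*$`,
		`pause_image = "registry.k8s.io/pause:3.10.1"`); err != nil {
		log.Fatal(err)
	}
}
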
	I0110 08:55:05.763111  337651 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 08:55:05.763196  337651 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 08:55:05.767248  337651 start.go:574] Will wait 60s for crictl version
	I0110 08:55:05.767300  337651 ssh_runner.go:195] Run: which crictl
	I0110 08:55:05.770834  337651 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 08:55:05.795545  337651 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 08:55:05.795612  337651 ssh_runner.go:195] Run: crio --version
	I0110 08:55:05.822934  337651 ssh_runner.go:195] Run: crio --version
	I0110 08:55:05.854094  337651 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 08:55:05.855440  337651 cli_runner.go:164] Run: docker network inspect newest-cni-582650 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:55:05.874881  337651 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0110 08:55:05.878985  337651 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
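
The bash one-liner above makes the host.minikube.internal entry idempotent: filter out any previous line for that name, append the current gateway mapping, and copy the temp file back over /etc/hosts. The same filter-and-append in Go (path, IP and hostname are taken from the log; pinHost is an illustrative name):

package main

import (
	"log"
	"os"
	"strings"
)

// pinHost rewrites the hosts file so that exactly one line maps name to ip.
func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale mapping, like grep -v
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.94.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
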
	I0110 08:55:05.890627  337651 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0110 08:55:05.891718  337651 kubeadm.go:884] updating cluster {Name:newest-cni-582650 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-582650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 08:55:05.891861  337651 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:55:05.891935  337651 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 08:55:05.926755  337651 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 08:55:05.926777  337651 crio.go:433] Images already preloaded, skipping extraction
	I0110 08:55:05.926824  337651 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 08:55:05.953234  337651 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 08:55:05.953260  337651 cache_images.go:86] Images are preloaded, skipping loading
	I0110 08:55:05.953268  337651 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I0110 08:55:05.953454  337651 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-582650 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-582650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 08:55:05.953555  337651 ssh_runner.go:195] Run: crio config
	I0110 08:55:05.999327  337651 cni.go:84] Creating CNI manager for ""
	I0110 08:55:05.999360  337651 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:55:05.999383  337651 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I0110 08:55:05.999417  337651 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-582650 NodeName:newest-cni-582650 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 08:55:05.999536  337651 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-582650"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0110 08:55:05.999603  337651 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 08:55:06.008278  337651 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 08:55:06.008353  337651 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 08:55:06.015782  337651 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 08:55:06.028209  337651 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 08:55:06.040652  337651 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I0110 08:55:06.053361  337651 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0110 08:55:06.057091  337651 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 08:55:06.067273  337651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:55:06.148919  337651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:55:06.175368  337651 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650 for IP: 192.168.94.2
	I0110 08:55:06.175391  337651 certs.go:195] generating shared ca certs ...
	I0110 08:55:06.175411  337651 certs.go:227] acquiring lock for ca certs: {Name:mk00e261408d0e9fd9be39128613c5110a764de2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:55:06.175572  337651 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.key
	I0110 08:55:06.175708  337651 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.key
	I0110 08:55:06.175769  337651 certs.go:257] generating profile certs ...
	I0110 08:55:06.175934  337651 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/client.key
	I0110 08:55:06.176008  337651 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/apiserver.key.0aa7c905
	I0110 08:55:06.176063  337651 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/proxy-client.key
	I0110 08:55:06.176203  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183.pem (1338 bytes)
	W0110 08:55:06.176248  337651 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183_empty.pem, impossibly tiny 0 bytes
	I0110 08:55:06.176263  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem (1675 bytes)
	I0110 08:55:06.176306  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem (1078 bytes)
	I0110 08:55:06.176343  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem (1123 bytes)
	I0110 08:55:06.176377  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem (1675 bytes)
	I0110 08:55:06.176437  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem (1708 bytes)
	I0110 08:55:06.177184  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 08:55:06.196870  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 08:55:06.215933  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 08:55:06.235476  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 08:55:06.258185  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 08:55:06.277751  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 08:55:06.295268  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 08:55:06.312421  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 08:55:06.329617  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem --> /usr/share/ca-certificates/71832.pem (1708 bytes)
	I0110 08:55:06.346649  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 08:55:06.364016  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183.pem --> /usr/share/ca-certificates/7183.pem (1338 bytes)
	I0110 08:55:06.382003  337651 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 08:55:06.394320  337651 ssh_runner.go:195] Run: openssl version
	I0110 08:55:06.400371  337651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/71832.pem
	I0110 08:55:06.407685  337651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/71832.pem /etc/ssl/certs/71832.pem
	I0110 08:55:06.415138  337651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71832.pem
	I0110 08:55:06.419188  337651 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 08:23 /usr/share/ca-certificates/71832.pem
	I0110 08:55:06.419234  337651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71832.pem
	I0110 08:55:06.454164  337651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 08:55:06.461860  337651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:55:06.470568  337651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 08:55:06.478089  337651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:55:06.481724  337651 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:55:06.481786  337651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:55:06.515894  337651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 08:55:06.523865  337651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7183.pem
	I0110 08:55:06.531389  337651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7183.pem /etc/ssl/certs/7183.pem
	I0110 08:55:06.538646  337651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7183.pem
	I0110 08:55:06.542199  337651 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 08:23 /usr/share/ca-certificates/7183.pem
	I0110 08:55:06.542240  337651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7183.pem
	I0110 08:55:06.577649  337651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I0110 08:55:06.585536  337651 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 08:55:06.589317  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 08:55:06.625993  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 08:55:06.660607  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 08:55:06.701294  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 08:55:06.750337  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 08:55:06.795920  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
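
Each `openssl x509 -checkend 86400` above exits non-zero when the certificate expires within the next 24 hours, which is what forces a regeneration of control-plane certs on restart. The equivalent test in Go's crypto/x509 (the cert path is one of those checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside d,
// mirroring `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}
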
	I0110 08:55:06.844782  337651 kubeadm.go:401] StartCluster: {Name:newest-cni-582650 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-582650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:55:06.844904  337651 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:55:06.844978  337651 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:55:06.876617  337651 cri.go:96] found id: "7365649bf838b1d9b8c45dcaa7ce29160f5dd8674c9802b9f0610e314ca173cc"
	I0110 08:55:06.876642  337651 cri.go:96] found id: "ce0d065d2705be147f0cd136ee494369b9a709e0327cb0d06b594a233ab11c96"
	I0110 08:55:06.876646  337651 cri.go:96] found id: "04153603f19d1830f6cad025b9d59e70752a925ea51b14474fc99161af31a6c1"
	I0110 08:55:06.876650  337651 cri.go:96] found id: "90c58c3cd9924aced7da1338afb4ee8fd756be611e2c9aed303c38a538dfbacc"
	I0110 08:55:06.876653  337651 cri.go:96] found id: ""
	I0110 08:55:06.876706  337651 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 08:55:06.889419  337651 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:55:06Z" level=error msg="open /run/runc: no such file or directory"
	I0110 08:55:06.889478  337651 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 08:55:06.897471  337651 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 08:55:06.897491  337651 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 08:55:06.897550  337651 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 08:55:06.905056  337651 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 08:55:06.905848  337651 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-582650" does not appear in /home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:55:06.906229  337651 kubeconfig.go:62] /home/jenkins/minikube-integration/22427-3641/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-582650" cluster setting kubeconfig missing "newest-cni-582650" context setting]
	I0110 08:55:06.906722  337651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/kubeconfig: {Name:mk0d29b3b0ee1fd71729aff31f901be50a2d0664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:55:06.907996  337651 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 08:55:06.916223  337651 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I0110 08:55:06.916252  337651 kubeadm.go:602] duration metric: took 18.746858ms to restartPrimaryControlPlane
	I0110 08:55:06.916267  337651 kubeadm.go:403] duration metric: took 71.493899ms to StartCluster
	I0110 08:55:06.916288  337651 settings.go:142] acquiring lock: {Name:mkbb32fc6441ceb31ce2923ea8999f8375298f08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:55:06.916352  337651 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:55:06.917032  337651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/kubeconfig: {Name:mk0d29b3b0ee1fd71729aff31f901be50a2d0664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
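
Repairing the kubeconfig amounts to inserting a cluster entry and a matching context, then rewriting the file under the WriteFile lock shown above. A hedged sketch using k8s.io/client-go's clientcmd package (names and paths from the log; minikube's own kubeconfig helpers do additional verification):

package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/22427-3641/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		log.Fatal(err)
	}

	const name = "newest-cni-582650"
	cluster := api.NewCluster()
	cluster.Server = "https://192.168.94.2:8443"
	cluster.CertificateAuthority = "/home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt"
	cfg.Clusters[name] = cluster // add the missing cluster setting

	ctx := api.NewContext()
	ctx.Cluster = name
	ctx.AuthInfo = name
	cfg.Contexts[name] = ctx // add the missing context setting
	cfg.CurrentContext = name

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		log.Fatal(err)
	}
}
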
	I0110 08:55:06.917252  337651 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 08:55:06.917332  337651 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 08:55:06.917423  337651 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-582650"
	I0110 08:55:06.917441  337651 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-582650"
	W0110 08:55:06.917448  337651 addons.go:248] addon storage-provisioner should already be in state true
	I0110 08:55:06.917456  337651 addons.go:70] Setting dashboard=true in profile "newest-cni-582650"
	I0110 08:55:06.917486  337651 addons.go:239] Setting addon dashboard=true in "newest-cni-582650"
	W0110 08:55:06.917500  337651 addons.go:248] addon dashboard should already be in state true
	I0110 08:55:06.917498  337651 config.go:182] Loaded profile config "newest-cni-582650": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:55:06.917505  337651 addons.go:70] Setting default-storageclass=true in profile "newest-cni-582650"
	I0110 08:55:06.917531  337651 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-582650"
	I0110 08:55:06.917545  337651 host.go:66] Checking if "newest-cni-582650" exists ...
	I0110 08:55:06.917487  337651 host.go:66] Checking if "newest-cni-582650" exists ...
	I0110 08:55:06.917888  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:06.918065  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:06.918090  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:06.922752  337651 out.go:179] * Verifying Kubernetes components...
	I0110 08:55:06.924557  337651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:55:06.944895  337651 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 08:55:06.944980  337651 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 08:55:06.946125  337651 addons.go:239] Setting addon default-storageclass=true in "newest-cni-582650"
	W0110 08:55:06.946159  337651 addons.go:248] addon default-storageclass should already be in state true
	I0110 08:55:06.946192  337651 host.go:66] Checking if "newest-cni-582650" exists ...
	I0110 08:55:06.946653  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:06.946876  337651 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 08:55:06.946895  337651 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 08:55:06.946956  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:06.947943  337651 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 08:55:06.949187  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 08:55:06.949212  337651 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 08:55:06.949272  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:06.979786  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:06.982713  337651 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 08:55:06.982757  337651 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 08:55:06.982820  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:06.986338  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:07.009531  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:07.065163  337651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:55:07.081832  337651 api_server.go:52] waiting for apiserver process to appear ...
	I0110 08:55:07.081898  337651 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 08:55:07.095536  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 08:55:07.095562  337651 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 08:55:07.096364  337651 api_server.go:72] duration metric: took 179.085582ms to wait for apiserver process to appear ...
	I0110 08:55:07.096384  337651 api_server.go:88] waiting for apiserver healthz status ...
	I0110 08:55:07.096403  337651 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0110 08:55:07.100030  337651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 08:55:07.111493  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 08:55:07.111519  337651 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 08:55:07.122472  337651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 08:55:07.128466  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 08:55:07.128484  337651 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 08:55:07.144597  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 08:55:07.144620  337651 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 08:55:07.160177  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 08:55:07.160236  337651 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 08:55:07.177064  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 08:55:07.177088  337651 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 08:55:07.192696  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 08:55:07.192723  337651 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 08:55:07.207042  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 08:55:07.207063  337651 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 08:55:07.219547  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 08:55:07.219572  337651 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 08:55:07.232446  337651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
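
Each addon is staged as files under /etc/kubernetes/addons and applied in a single kubectl invocation with repeated -f flags, using the version-matched kubectl from /var/lib/minikube/binaries and KUBECONFIG pointing at the in-VM config. A sketch of assembling that command (file list shortened; applyManifests is an illustrative name):

package main

import (
	"log"
	"os"
	"os/exec"
)

// applyManifests runs `kubectl apply -f a.yaml -f b.yaml ...` with KUBECONFIG set.
func applyManifests(kubectl, kubeconfig string, files []string) error {
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := applyManifests(
		"/var/lib/minikube/binaries/v1.35.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml", // ...plus the rest of the dashboard manifests
		},
	)
	if err != nil {
		log.Fatal(err)
	}
}
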
	I0110 08:55:08.397883  337651 api_server.go:325] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0110 08:55:08.397912  337651 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0110 08:55:08.397934  337651 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0110 08:55:08.408043  337651 api_server.go:325] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0110 08:55:08.408134  337651 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0110 08:55:08.597191  337651 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0110 08:55:08.602170  337651 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 08:55:08.602223  337651 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 08:55:08.914012  337651 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.813953016s)
	I0110 08:55:08.914115  337651 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.791608298s)
	I0110 08:55:08.914183  337651 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.681703498s)
	I0110 08:55:08.916001  337651 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-582650 addons enable metrics-server
	
	I0110 08:55:08.924789  337651 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 08:55:08.926065  337651 addons.go:530] duration metric: took 2.008739629s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0110 08:55:09.096869  337651 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0110 08:55:09.101576  337651 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 08:55:09.101606  337651 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 08:55:09.597108  337651 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0110 08:55:09.606628  337651 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0110 08:55:09.608612  337651 api_server.go:141] control plane version: v1.35.0
	I0110 08:55:09.608678  337651 api_server.go:131] duration metric: took 2.512285395s to wait for apiserver health ...
	I0110 08:55:09.608701  337651 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 08:55:09.612542  337651 system_pods.go:59] 8 kube-system pods found
	I0110 08:55:09.612572  337651 system_pods.go:61] "coredns-7d764666f9-bmscc" [bc0ad55b-bbf6-4898-a38a-7a1a2d154cb3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 08:55:09.612580  337651 system_pods.go:61] "etcd-newest-cni-582650" [bb439312-4d17-46e1-9d07-4b972ad2299b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 08:55:09.612587  337651 system_pods.go:61] "kindnet-gp4sj" [c1167720-98b8-4850-a264-11964eb2675d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 08:55:09.612599  337651 system_pods.go:61] "kube-apiserver-newest-cni-582650" [947302b1-615d-4f31-976c-039fcf37be97] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 08:55:09.612607  337651 system_pods.go:61] "kube-controller-manager-newest-cni-582650" [c2156827-ae41-4c25-958a-ea329f7adf65] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 08:55:09.612614  337651 system_pods.go:61] "kube-proxy-ldmfv" [02b5ffbb-b52f-4339-bee2-b9400a4714bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 08:55:09.612621  337651 system_pods.go:61] "kube-scheduler-newest-cni-582650" [8d788728-c388-42a6-9bcd-9ab2bf3468fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 08:55:09.612634  337651 system_pods.go:61] "storage-provisioner" [349ec60d-a776-479e-b9a0-892989e886eb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 08:55:09.612643  337651 system_pods.go:74] duration metric: took 3.926901ms to wait for pod list to return data ...
	I0110 08:55:09.612653  337651 default_sa.go:34] waiting for default service account to be created ...
	I0110 08:55:09.615183  337651 default_sa.go:45] found service account: "default"
	I0110 08:55:09.615208  337651 default_sa.go:55] duration metric: took 2.548851ms for default service account to be created ...
	I0110 08:55:09.615222  337651 kubeadm.go:587] duration metric: took 2.697945894s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 08:55:09.615245  337651 node_conditions.go:102] verifying NodePressure condition ...
	I0110 08:55:09.617802  337651 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 08:55:09.617836  337651 node_conditions.go:123] node cpu capacity is 8
	I0110 08:55:09.617855  337651 node_conditions.go:105] duration metric: took 2.604361ms to run NodePressure ...
	I0110 08:55:09.617875  337651 start.go:242] waiting for startup goroutines ...
	I0110 08:55:09.617884  337651 start.go:247] waiting for cluster config update ...
	I0110 08:55:09.617898  337651 start.go:256] writing updated cluster config ...
	I0110 08:55:09.618148  337651 ssh_runner.go:195] Run: rm -f paused
	I0110 08:55:09.667016  337651 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 08:55:09.669727  337651 out.go:179] * Done! kubectl is now configured to use "newest-cni-582650" cluster and "default" namespace by default
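
The start log above shows the apiserver health probe settling in three stages: /healthz answers 403 while system:anonymous is still unauthorized (the rbac/bootstrap-roles post-start hook has not finished), then 500 while the [-] checks listed above are pending, and finally 200 ("ok"). Below is a minimal sketch of such a poll loop, assuming the endpoint and rough timings seen in the log; it is not minikube's actual implementation.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log above; the probe carries no client
	// certificate, so the apiserver sees it as system:anonymous.
	const url = "https://192.168.94.2:8443/healthz"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip CA verification for the cluster's self-signed serving cert
		// (acceptable for a throwaway health probe, never for real traffic).
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			time.Sleep(200 * time.Millisecond) // apiserver not listening yet
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		switch resp.StatusCode {
		case http.StatusOK:
			fmt.Println("apiserver healthy:", string(body)) // body is "ok"
			return
		case http.StatusForbidden:
			// 403: RBAC for the anonymous user not bootstrapped yet.
		case http.StatusInternalServerError:
			// 500: one or more [-] post-start hook checks still failing.
		}
		time.Sleep(200 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}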
	
	
	==> CRI-O <==
	Jan 10 08:54:42 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:42.078972388Z" level=info msg="Started container" PID=1804 containerID=69086618990f9c502da1b3075049a6a5604434cff22757ba9df7794639a6d093 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml/dashboard-metrics-scraper id=3572ce21-e1bf-40ca-8bbe-c676b474887a name=/runtime.v1.RuntimeService/StartContainer sandboxID=6db5ee88501b7f60b1ce1831614c2769b806fab40c20ccf42bc03d438425b3ca
	Jan 10 08:54:42 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:42.120902328Z" level=info msg="Removing container: abdc865b79b316a8cab4eb0835c4e81a86adcfc0e06c79f1551a7e854cbe6e00" id=62b2bac4-570d-415a-b23a-67978744ff77 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 08:54:42 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:42.130108588Z" level=info msg="Removed container abdc865b79b316a8cab4eb0835c4e81a86adcfc0e06c79f1551a7e854cbe6e00: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml/dashboard-metrics-scraper" id=62b2bac4-570d-415a-b23a-67978744ff77 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 08:54:51 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:51.146298131Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1c1d9dff-9d09-4d06-bbdf-06fb8492b64d name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:51 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:51.1473432Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bb7b500a-c3e1-4f43-bbac-01f01c7d65c1 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:54:51 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:51.148533873Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ee4f3157-4a7a-472c-a5be-63d520de2bea name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:51 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:51.148680472Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:51 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:51.153038614Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:51 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:51.153233242Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3d1ec75d7051adf29c699b72e16595865f75af31caf6f818052715c689c6272c/merged/etc/passwd: no such file or directory"
	Jan 10 08:54:51 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:51.153268649Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3d1ec75d7051adf29c699b72e16595865f75af31caf6f818052715c689c6272c/merged/etc/group: no such file or directory"
	Jan 10 08:54:51 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:51.153558391Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:54:51 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:51.179939143Z" level=info msg="Created container e01b7aef39ee6058f6e5264b1e701ec436e17ea543c0a7077a34986502eae931: kube-system/storage-provisioner/storage-provisioner" id=ee4f3157-4a7a-472c-a5be-63d520de2bea name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:54:51 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:51.180648143Z" level=info msg="Starting container: e01b7aef39ee6058f6e5264b1e701ec436e17ea543c0a7077a34986502eae931" id=e357dc0d-7588-4bbc-a5ce-14b19c025696 name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:54:51 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:54:51.182459742Z" level=info msg="Started container" PID=1819 containerID=e01b7aef39ee6058f6e5264b1e701ec436e17ea543c0a7077a34986502eae931 description=kube-system/storage-provisioner/storage-provisioner id=e357dc0d-7588-4bbc-a5ce-14b19c025696 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4a27bc3c57fee45d5f2b0f4d6bd667c5ff2c3d58587d409dfb97d0a3210f5082
	Jan 10 08:55:03 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:55:03.027826457Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7708ad59-2510-4568-aa76-f9c9c242686a name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:55:03 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:55:03.028793247Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=75ed01f4-7fc1-4a72-b250-1e2313fba60d name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:55:03 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:55:03.029933144Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml/dashboard-metrics-scraper" id=10d84106-4d01-4bd8-81fe-d536059b6ff9 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:55:03 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:55:03.030084845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:55:03 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:55:03.035493917Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:55:03 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:55:03.035966615Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:55:03 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:55:03.064259529Z" level=info msg="Created container f4ba245850b91f72206873d0692ed94f1e4c692957ae0aab222f8c6cebe6e4e6: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml/dashboard-metrics-scraper" id=10d84106-4d01-4bd8-81fe-d536059b6ff9 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:55:03 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:55:03.064949963Z" level=info msg="Starting container: f4ba245850b91f72206873d0692ed94f1e4c692957ae0aab222f8c6cebe6e4e6" id=cd796f07-0d2b-4d37-8c2a-37a2e29feddb name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:55:03 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:55:03.066837162Z" level=info msg="Started container" PID=1860 containerID=f4ba245850b91f72206873d0692ed94f1e4c692957ae0aab222f8c6cebe6e4e6 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml/dashboard-metrics-scraper id=cd796f07-0d2b-4d37-8c2a-37a2e29feddb name=/runtime.v1.RuntimeService/StartContainer sandboxID=6db5ee88501b7f60b1ce1831614c2769b806fab40c20ccf42bc03d438425b3ca
	Jan 10 08:55:03 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:55:03.179922558Z" level=info msg="Removing container: 69086618990f9c502da1b3075049a6a5604434cff22757ba9df7794639a6d093" id=15a32fd2-adc2-4eb3-901d-b8f2ad5e9be5 name=/runtime.v1.RuntimeService/RemoveContainer
	Jan 10 08:55:03 default-k8s-diff-port-225354 crio[576]: time="2026-01-10T08:55:03.188748556Z" level=info msg="Removed container 69086618990f9c502da1b3075049a6a5604434cff22757ba9df7794639a6d093: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml/dashboard-metrics-scraper" id=15a32fd2-adc2-4eb3-901d-b8f2ad5e9be5 name=/runtime.v1.RuntimeService/RemoveContainer
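
Each CRI-O entry above is the runtime half of a kubelet CRI call (CreateContainer, StartContainer, RemoveContainer on /runtime.v1.RuntimeService). The same state is queryable by any CRI client; here is a sketch using the published CRI API over CRI-O's default socket. The socket path, filter, and output format are this sketch's assumptions; `crictl ps -a --label io.kubernetes.pod.namespace=kube-system` does the equivalent.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's conventional socket location; an assumption for this sketch.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	// List all containers in the kube-system namespace, running or exited.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{
			LabelSelector: map[string]string{"io.kubernetes.pod.namespace": "kube-system"},
		},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Truncated ID, container name, restart attempt, and state enum.
		fmt.Printf("%.13s\t%s\tattempt=%d\t%s\n",
			c.Id, c.Metadata.Name, c.Metadata.Attempt, c.State)
	}
}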
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	f4ba245850b91       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   3                   6db5ee88501b7       dashboard-metrics-scraper-867fb5f87b-d9dml             kubernetes-dashboard
	e01b7aef39ee6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   4a27bc3c57fee       storage-provisioner                                    kube-system
	803bc92acffae       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago      Running             kubernetes-dashboard        0                   62b3477e2a9cc       kubernetes-dashboard-b84665fb8-4pp7j                   kubernetes-dashboard
	235ed1d8fdbe3       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           54 seconds ago      Running             coredns                     0                   e56091141abb0       coredns-7d764666f9-cjklg                               kube-system
	8d2f47b9b0900       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   00a18ad303819       busybox                                                default
	72e24a04a184b       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8                                           54 seconds ago      Running             kube-proxy                  0                   0b0c5a85f220f       kube-proxy-fbfrd                                       kube-system
	3c114dad8ad59       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           54 seconds ago      Running             kindnet-cni                 0                   5960d3ff239b1       kindnet-sd4nd                                          kube-system
	d433193a33ce7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   4a27bc3c57fee       storage-provisioner                                    kube-system
	85fbcb73a888a       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508                                           57 seconds ago      Running             kube-controller-manager     0                   d01aa86260796       kube-controller-manager-default-k8s-diff-port-225354   kube-system
	6de83a52f42b4       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc                                           57 seconds ago      Running             kube-scheduler              0                   cc138c284c7b7       kube-scheduler-default-k8s-diff-port-225354            kube-system
	767f06c98be9d       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           57 seconds ago      Running             etcd                        0                   971f1aa35e898       etcd-default-k8s-diff-port-225354                      kube-system
	5055dfe1945b7       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499                                           57 seconds ago      Running             kube-apiserver              0                   d5f08b274aee6       kube-apiserver-default-k8s-diff-port-225354            kube-system
	
	
	==> coredns [235ed1d8fdbe3c06d2c84ba29264bcc6d08d11a54a4c280982b06d15ec0b9d32] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:41089 - 60580 "HINFO IN 5419416935415468060.439216048556848234. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.020744613s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-225354
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-225354
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=default-k8s-diff-port-225354
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T08_53_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 08:53:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-225354
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 08:55:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 08:54:50 +0000   Sat, 10 Jan 2026 08:53:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 08:54:50 +0000   Sat, 10 Jan 2026 08:53:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 08:54:50 +0000   Sat, 10 Jan 2026 08:53:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 10 Jan 2026 08:54:50 +0000   Sat, 10 Jan 2026 08:53:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-225354
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                5a40150e-8f76-4d08-b9ae-bb32149e49ad
	  Boot ID:                    7c12f492-59b6-440b-986c-741ece916a23
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-7d764666f9-cjklg                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-default-k8s-diff-port-225354                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-sd4nd                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-default-k8s-diff-port-225354             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-225354    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-fbfrd                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-default-k8s-diff-port-225354             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-d9dml              0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-4pp7j                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  111s  node-controller  Node default-k8s-diff-port-225354 event: Registered Node default-k8s-diff-port-225354 in Controller
	  Normal  RegisteredNode  52s   node-controller  Node default-k8s-diff-port-225354 event: Registered Node default-k8s-diff-port-225354 in Controller
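
The percentages in the Allocated resources table follow directly from the Capacity block: 850m of summed CPU requests against 8 full cores is 850/8000 = 10%, the single 100m limit is about 1%, and 220Mi of memory against 32863352Ki truncates to 0%. A throwaway check, with every number copied from the tables above:

package main

import "fmt"

func main() {
	// Node capacity from the description above.
	cpuMilli := 8 * 1000 // 8 cores, in millicores
	memKi := 32863352    // memory capacity in Ki

	// Per-pod requests/limits summed from the non-terminated pods table.
	cpuReqMilli := 100 + 100 + 100 + 250 + 200 + 100 // = 850m
	cpuLimMilli := 100                               // kindnet's 100m limit
	memReqKi := (70 + 100 + 50) * 1024               // 220Mi in Ki
	memLimKi := (170 + 50) * 1024                    // 220Mi in Ki

	fmt.Printf("cpu requests: %dm (%d%%)\n", cpuReqMilli, cpuReqMilli*100/cpuMilli)
	fmt.Printf("cpu limits:   %dm (%d%%)\n", cpuLimMilli, cpuLimMilli*100/cpuMilli)
	fmt.Printf("mem requests: %dKi (%d%%)\n", memReqKi, memReqKi*100/memKi)
	fmt.Printf("mem limits:   %dKi (%d%%)\n", memLimKi, memLimKi*100/memKi)
}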
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 4a 03 46 88 66 4e 08 06
	[  +6.406566] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.001429] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[ +24.002020] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7a 6f 49 7e 9d d1 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 3a ac 18 98 c9 08 06
	[Jan10 08:52] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4a 05 5c 86 cf df 08 06
	[  +0.000337] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.000602] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[  +0.739059] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	[  +1.988700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[ +20.040962] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 ac f8 cf fb 84 08 06
	[  +0.000375] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[  +0.000915] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	
	
	==> etcd [767f06c98be9d86d55d0cbaaa375406db22fd312258e490654cdcba950d47c27] <==
	{"level":"info","ts":"2026-01-10T08:54:17.594358Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2026-01-10T08:54:17.594164Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2026-01-10T08:54:17.594558Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T08:54:17.594826Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T08:54:18.579437Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T08:54:18.579493Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T08:54:18.579572Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2026-01-10T08:54:18.579588Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:54:18.579617Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T08:54:18.580506Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-10T08:54:18.580550Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"9f0758e1c58a86ed has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:54:18.580577Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2026-01-10T08:54:18.580590Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2026-01-10T08:54:18.581344Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:default-k8s-diff-port-225354 ClientURLs:[https://192.168.85.2:2379]}","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T08:54:18.581397Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:54:18.581428Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:54:18.581610Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T08:54:18.581635Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T08:54:18.582985Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:54:18.583052Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:54:18.585747Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T08:54:18.585817Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2026-01-10T08:54:33.130781Z","caller":"traceutil/trace.go:172","msg":"trace[966154735] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"124.692788ms","start":"2026-01-10T08:54:33.006023Z","end":"2026-01-10T08:54:33.130716Z","steps":["trace[966154735] 'process raft request'  (duration: 124.436596ms)"],"step_count":1}
	{"level":"warn","ts":"2026-01-10T08:54:33.361601Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.542989ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722598312611999281 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-225354\" mod_revision:602 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-225354\" value_size:7956 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-default-k8s-diff-port-225354\" > >>","response":"size:16"}
	{"level":"info","ts":"2026-01-10T08:54:33.361760Z","caller":"traceutil/trace.go:172","msg":"trace[1779755040] transaction","detail":"{read_only:false; response_revision:604; number_of_response:1; }","duration":"202.257974ms","start":"2026-01-10T08:54:33.159467Z","end":"2026-01-10T08:54:33.361725Z","steps":["trace[1779755040] 'process raft request'  (duration: 72.063928ms)","trace[1779755040] 'compare'  (duration: 129.420721ms)"],"step_count":2}
	
	
	==> kernel <==
	 08:55:14 up 37 min,  0 user,  load average: 4.87, 4.29, 2.81
	Linux default-k8s-diff-port-225354 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3c114dad8ad59dd4b14f99ea5527623796f92164415824fe236b0c02d4257c0b] <==
	I0110 08:54:20.511487       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 08:54:20.604095       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0110 08:54:20.605296       1 main.go:148] setting mtu 1500 for CNI 
	I0110 08:54:20.605355       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 08:54:20.605391       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T08:54:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 08:54:20.808349       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 08:54:20.808387       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 08:54:20.808399       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 08:54:20.808537       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 08:54:21.209240       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 08:54:21.209267       1 metrics.go:72] Registering metrics
	I0110 08:54:21.209317       1 controller.go:711] "Syncing nftables rules"
	I0110 08:54:30.809876       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 08:54:30.809916       1 main.go:301] handling current node
	I0110 08:54:40.814835       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 08:54:40.814891       1 main.go:301] handling current node
	I0110 08:54:50.809031       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 08:54:50.809066       1 main.go:301] handling current node
	I0110 08:55:00.811842       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 08:55:00.811894       1 main.go:301] handling current node
	I0110 08:55:10.815125       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0110 08:55:10.815159       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5055dfe1945b7e474350afd64ade8604c08027a381ce57320b00e445ef977a5c] <==
	I0110 08:54:19.615211       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 08:54:19.615310       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:19.616295       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0110 08:54:19.616524       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 08:54:19.617180       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:19.617265       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 08:54:19.617300       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 08:54:19.617328       1 shared_informer.go:377] "Caches are synced"
	E0110 08:54:19.618017       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 08:54:19.623030       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 08:54:19.623381       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0110 08:54:19.627529       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 08:54:19.632532       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 08:54:19.650911       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0110 08:54:19.879023       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 08:54:19.907650       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 08:54:19.926649       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 08:54:19.933519       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 08:54:19.944273       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 08:54:19.977464       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.236.20"}
	I0110 08:54:19.987618       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.132.10"}
	I0110 08:54:20.519035       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 08:54:23.213293       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 08:54:23.261361       1 controller.go:667] quota admission added evaluator for: endpoints
	I0110 08:54:23.362245       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [85fbcb73a888a911d321e3d1ed0152e1aa93447d76ca22015d3a09638892f2af] <==
	I0110 08:54:22.766226       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.766671       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.766726       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.766807       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.766810       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.766768       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.766955       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.767056       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I0110 08:54:22.767130       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="default-k8s-diff-port-225354"
	I0110 08:54:22.767179       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0110 08:54:22.767356       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.767397       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.767518       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.767946       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.768011       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.768049       1 range_allocator.go:177] "Sending events to api server"
	I0110 08:54:22.768092       1 range_allocator.go:181] "Starting range CIDR allocator"
	I0110 08:54:22.768107       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:54:22.768117       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.769273       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.772182       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:54:22.867364       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:22.867388       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 08:54:22.867395       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I0110 08:54:22.872498       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [72e24a04a184b275fa6ca7d48238546975c5ce403c3d895a3acdd96c296c0a84] <==
	I0110 08:54:20.438869       1 server_linux.go:53] "Using iptables proxy"
	I0110 08:54:20.495486       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:54:20.595688       1 shared_informer.go:377] "Caches are synced"
	I0110 08:54:20.595757       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0110 08:54:20.596080       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 08:54:20.619483       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 08:54:20.619555       1 server_linux.go:136] "Using iptables Proxier"
	I0110 08:54:20.627141       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 08:54:20.627640       1 server.go:529] "Version info" version="v1.35.0"
	I0110 08:54:20.627673       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:54:20.630284       1 config.go:106] "Starting endpoint slice config controller"
	I0110 08:54:20.630354       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 08:54:20.630486       1 config.go:200] "Starting service config controller"
	I0110 08:54:20.630521       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 08:54:20.630522       1 config.go:309] "Starting node config controller"
	I0110 08:54:20.630843       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 08:54:20.630876       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 08:54:20.630492       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 08:54:20.630929       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 08:54:20.730561       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 08:54:20.730832       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 08:54:20.731136       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [6de83a52f42b4d00ef4463aa0a10635035e611d92fcb5f692497cd23e40d7676] <==
	I0110 08:54:17.855670       1 serving.go:386] Generated self-signed cert in-memory
	W0110 08:54:19.523613       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 08:54:19.523657       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 08:54:19.523669       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 08:54:19.523679       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 08:54:19.553413       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 08:54:19.553536       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:54:19.557009       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 08:54:19.557116       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:54:19.557827       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 08:54:19.557906       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 08:54:19.658075       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 08:54:33 default-k8s-diff-port-225354 kubelet[738]: E0110 08:54:33.095670     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-d9dml_kubernetes-dashboard(b7fa8b91-8b6b-46e3-9146-1e0d5fa8ce4a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml" podUID="b7fa8b91-8b6b-46e3-9146-1e0d5fa8ce4a"
	Jan 10 08:54:40 default-k8s-diff-port-225354 kubelet[738]: E0110 08:54:40.772804     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:40 default-k8s-diff-port-225354 kubelet[738]: I0110 08:54:40.772861     738 scope.go:122] "RemoveContainer" containerID="abdc865b79b316a8cab4eb0835c4e81a86adcfc0e06c79f1551a7e854cbe6e00"
	Jan 10 08:54:40 default-k8s-diff-port-225354 kubelet[738]: E0110 08:54:40.773132     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-d9dml_kubernetes-dashboard(b7fa8b91-8b6b-46e3-9146-1e0d5fa8ce4a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml" podUID="b7fa8b91-8b6b-46e3-9146-1e0d5fa8ce4a"
	Jan 10 08:54:42 default-k8s-diff-port-225354 kubelet[738]: E0110 08:54:42.026854     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:42 default-k8s-diff-port-225354 kubelet[738]: I0110 08:54:42.026927     738 scope.go:122] "RemoveContainer" containerID="abdc865b79b316a8cab4eb0835c4e81a86adcfc0e06c79f1551a7e854cbe6e00"
	Jan 10 08:54:42 default-k8s-diff-port-225354 kubelet[738]: I0110 08:54:42.119643     738 scope.go:122] "RemoveContainer" containerID="abdc865b79b316a8cab4eb0835c4e81a86adcfc0e06c79f1551a7e854cbe6e00"
	Jan 10 08:54:42 default-k8s-diff-port-225354 kubelet[738]: E0110 08:54:42.119883     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:42 default-k8s-diff-port-225354 kubelet[738]: I0110 08:54:42.119916     738 scope.go:122] "RemoveContainer" containerID="69086618990f9c502da1b3075049a6a5604434cff22757ba9df7794639a6d093"
	Jan 10 08:54:42 default-k8s-diff-port-225354 kubelet[738]: E0110 08:54:42.120128     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-d9dml_kubernetes-dashboard(b7fa8b91-8b6b-46e3-9146-1e0d5fa8ce4a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml" podUID="b7fa8b91-8b6b-46e3-9146-1e0d5fa8ce4a"
	Jan 10 08:54:50 default-k8s-diff-port-225354 kubelet[738]: E0110 08:54:50.771786     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml" containerName="dashboard-metrics-scraper"
	Jan 10 08:54:50 default-k8s-diff-port-225354 kubelet[738]: I0110 08:54:50.771833     738 scope.go:122] "RemoveContainer" containerID="69086618990f9c502da1b3075049a6a5604434cff22757ba9df7794639a6d093"
	Jan 10 08:54:50 default-k8s-diff-port-225354 kubelet[738]: E0110 08:54:50.772077     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-d9dml_kubernetes-dashboard(b7fa8b91-8b6b-46e3-9146-1e0d5fa8ce4a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml" podUID="b7fa8b91-8b6b-46e3-9146-1e0d5fa8ce4a"
	Jan 10 08:54:51 default-k8s-diff-port-225354 kubelet[738]: I0110 08:54:51.145807     738 scope.go:122] "RemoveContainer" containerID="d433193a33ce7cc58ddea93f07610ab5f4bf6c281e65a05ab523fab1fa9029b0"
	Jan 10 08:54:55 default-k8s-diff-port-225354 kubelet[738]: E0110 08:54:55.999892     738 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-cjklg" containerName="coredns"
	Jan 10 08:55:03 default-k8s-diff-port-225354 kubelet[738]: E0110 08:55:03.027253     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml" containerName="dashboard-metrics-scraper"
	Jan 10 08:55:03 default-k8s-diff-port-225354 kubelet[738]: I0110 08:55:03.027299     738 scope.go:122] "RemoveContainer" containerID="69086618990f9c502da1b3075049a6a5604434cff22757ba9df7794639a6d093"
	Jan 10 08:55:03 default-k8s-diff-port-225354 kubelet[738]: I0110 08:55:03.178434     738 scope.go:122] "RemoveContainer" containerID="69086618990f9c502da1b3075049a6a5604434cff22757ba9df7794639a6d093"
	Jan 10 08:55:03 default-k8s-diff-port-225354 kubelet[738]: E0110 08:55:03.178667     738 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml" containerName="dashboard-metrics-scraper"
	Jan 10 08:55:03 default-k8s-diff-port-225354 kubelet[738]: I0110 08:55:03.178707     738 scope.go:122] "RemoveContainer" containerID="f4ba245850b91f72206873d0692ed94f1e4c692957ae0aab222f8c6cebe6e4e6"
	Jan 10 08:55:03 default-k8s-diff-port-225354 kubelet[738]: E0110 08:55:03.178965     738 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-d9dml_kubernetes-dashboard(b7fa8b91-8b6b-46e3-9146-1e0d5fa8ce4a)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-d9dml" podUID="b7fa8b91-8b6b-46e3-9146-1e0d5fa8ce4a"
	Jan 10 08:55:09 default-k8s-diff-port-225354 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 08:55:09 default-k8s-diff-port-225354 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 08:55:09 default-k8s-diff-port-225354 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Jan 10 08:55:09 default-k8s-diff-port-225354 systemd[1]: kubelet.service: Consumed 1.831s CPU time.
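
The kubelet lines trace dashboard-metrics-scraper through CrashLoopBackOff: the restart delay doubles 10s, 20s, 40s with each failed attempt. Kubelet's container restart backoff is, roughly, a doubling series starting at 10s and capped at five minutes; the sketch below reproduces that progression (kubelet's actual clock handling and jitter differ).

package main

import (
	"fmt"
	"time"
)

func main() {
	// Backoff constants observed/assumed: 10s base (matches the log's
	// 10s, 20s, 40s sequence), doubling per failure, capped at 5 minutes.
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	delay := base
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: back-off %s\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}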
	
	
	==> kubernetes-dashboard [803bc92acffae10929c55ac97f6e93e1c6fbc136ab07254668d7394f7b1734bc] <==
	2026/01/10 08:54:27 Starting overwatch
	2026/01/10 08:54:27 Using namespace: kubernetes-dashboard
	2026/01/10 08:54:27 Using in-cluster config to connect to apiserver
	2026/01/10 08:54:27 Using secret token for csrf signing
	2026/01/10 08:54:27 Initializing csrf token from kubernetes-dashboard-csrf secret
	2026/01/10 08:54:27 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2026/01/10 08:54:27 Successful initial request to the apiserver, version: v1.35.0
	2026/01/10 08:54:27 Generating JWE encryption key
	2026/01/10 08:54:27 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2026/01/10 08:54:27 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2026/01/10 08:54:27 Initializing JWE encryption key from synchronized object
	2026/01/10 08:54:27 Creating in-cluster Sidecar client
	2026/01/10 08:54:27 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2026/01/10 08:54:27 Serving insecurely on HTTP port: 9090
	2026/01/10 08:54:57 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [d433193a33ce7cc58ddea93f07610ab5f4bf6c281e65a05ab523fab1fa9029b0] <==
	I0110 08:54:20.402672       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0110 08:54:50.408128       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [e01b7aef39ee6058f6e5264b1e701ec436e17ea543c0a7077a34986502eae931] <==
	I0110 08:54:51.195051       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0110 08:54:51.202807       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0110 08:54:51.202858       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0110 08:54:51.204860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:54.659575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:54:58.919689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:55:02.517795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:55:05.572187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:55:08.595363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:55:08.600798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 08:55:08.600971       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0110 08:55:08.601103       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ec9aed77-6d7b-4b77-832d-6c05972cbbb9", APIVersion:"v1", ResourceVersion:"641", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-225354_057747ac-a12c-4556-b3dc-e2e3e942d42f became leader
	I0110 08:55:08.601228       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-225354_057747ac-a12c-4556-b3dc-e2e3e942d42f!
	W0110 08:55:08.603090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:55:08.606642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0110 08:55:08.701549       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-225354_057747ac-a12c-4556-b3dc-e2e3e942d42f!
	W0110 08:55:10.610211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:55:10.614308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:55:12.617756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:55:12.622354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:55:14.626220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0110 08:55:14.631189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
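
Note on the repeated deprecation warnings above: the storage provisioner takes its leader-election lock on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), so the API server warns on every request that v1 Endpoints is deprecated in v1.33+. The generic message points at EndpointSlice, but for lock objects the usual migration is to a coordination.k8s.io/v1 Lease. The Go sketch below is a minimal, hypothetical illustration of that Lease-based pattern with client-go; it is not the provisioner's actual code, and the lock name, namespace, and identity are placeholders echoing the log.

	// Hypothetical sketch: leader election on a Lease instead of the
	// deprecated v1 Endpoints lock seen in the storage-provisioner log.
	package main

	import (
		"context"
		"log"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // in-cluster config, as the provisioner uses
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Lock name/namespace mirror the log entry; purely illustrative.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "example-holder"},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease") },
				OnStoppedLeading: func() { log.Println("lost lease") },
			},
		})
	}
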
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-225354 -n default-k8s-diff-port-225354
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-225354 -n default-k8s-diff-port-225354: exit status 2 (342.457188ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
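
The "(may be ok)" note reflects that minikube status uses its exit code to report component state, so the test helper records a non-zero exit without treating it as fatal. A hypothetical, self-contained snippet showing how such a helper can capture both the template output and the exit code (binary path, profile, and format string copied from the invocation above):

	// Hypothetical helper: run "minikube status" and recover its exit code,
	// which can be non-zero even when stdout reports "Running".
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", "default-k8s-diff-port-225354")
		out, err := cmd.Output()
		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode() // 2 in the run above, while stdout still read "Running"
		}
		fmt.Printf("stdout=%q exit=%d\n", strings.TrimSpace(string(out)), code)
	}
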
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-225354 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.49s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (5.43s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-582650 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-582650 --alsologtostderr -v=1: exit status 80 (1.671026976s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-582650 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 08:55:10.347716  339914 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:55:10.348023  339914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:55:10.348034  339914 out.go:374] Setting ErrFile to fd 2...
	I0110 08:55:10.348038  339914 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:55:10.348275  339914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:55:10.348565  339914 out.go:368] Setting JSON to false
	I0110 08:55:10.348586  339914 mustload.go:66] Loading cluster: newest-cni-582650
	I0110 08:55:10.348999  339914 config.go:182] Loaded profile config "newest-cni-582650": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:55:10.349416  339914 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:10.368412  339914 host.go:66] Checking if "newest-cni-582650" exists ...
	I0110 08:55:10.368682  339914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:55:10.426313  339914 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2026-01-10 08:55:10.415045422 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:55:10.427168  339914 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1767438792-22376/minikube-v1.37.0-1767438792-22376-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1767438792-22376-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) preload-source:auto profile:newest-cni-582650 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) rosetta:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I0110 08:55:10.430393  339914 out.go:179] * Pausing node newest-cni-582650 ... 
	I0110 08:55:10.432796  339914 host.go:66] Checking if "newest-cni-582650" exists ...
	I0110 08:55:10.433136  339914 ssh_runner.go:195] Run: systemctl --version
	I0110 08:55:10.433196  339914 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:10.454830  339914 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:10.550843  339914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:55:10.563471  339914 pause.go:52] kubelet running: true
	I0110 08:55:10.563525  339914 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 08:55:10.701526  339914 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 08:55:10.701590  339914 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 08:55:10.769095  339914 cri.go:96] found id: "dafc458b0b0ed2e517d82757911d24c31a8b3d269aea3f699c10d94d701dffe2"
	I0110 08:55:10.769118  339914 cri.go:96] found id: "fe72c29c3cb566af837dc49a7fa9ff51841e0d239b1c0e7672cef1d14b2e2e1e"
	I0110 08:55:10.769123  339914 cri.go:96] found id: "7365649bf838b1d9b8c45dcaa7ce29160f5dd8674c9802b9f0610e314ca173cc"
	I0110 08:55:10.769126  339914 cri.go:96] found id: "ce0d065d2705be147f0cd136ee494369b9a709e0327cb0d06b594a233ab11c96"
	I0110 08:55:10.769129  339914 cri.go:96] found id: "04153603f19d1830f6cad025b9d59e70752a925ea51b14474fc99161af31a6c1"
	I0110 08:55:10.769135  339914 cri.go:96] found id: "90c58c3cd9924aced7da1338afb4ee8fd756be611e2c9aed303c38a538dfbacc"
	I0110 08:55:10.769138  339914 cri.go:96] found id: ""
	I0110 08:55:10.769187  339914 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:55:10.781455  339914 retry.go:84] will retry after 300ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:55:10Z" level=error msg="open /run/runc: no such file or directory"
	I0110 08:55:11.047828  339914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:55:11.060031  339914 pause.go:52] kubelet running: false
	I0110 08:55:11.060091  339914 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 08:55:11.171497  339914 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 08:55:11.171583  339914 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 08:55:11.236977  339914 cri.go:96] found id: "dafc458b0b0ed2e517d82757911d24c31a8b3d269aea3f699c10d94d701dffe2"
	I0110 08:55:11.237000  339914 cri.go:96] found id: "fe72c29c3cb566af837dc49a7fa9ff51841e0d239b1c0e7672cef1d14b2e2e1e"
	I0110 08:55:11.237004  339914 cri.go:96] found id: "7365649bf838b1d9b8c45dcaa7ce29160f5dd8674c9802b9f0610e314ca173cc"
	I0110 08:55:11.237007  339914 cri.go:96] found id: "ce0d065d2705be147f0cd136ee494369b9a709e0327cb0d06b594a233ab11c96"
	I0110 08:55:11.237010  339914 cri.go:96] found id: "04153603f19d1830f6cad025b9d59e70752a925ea51b14474fc99161af31a6c1"
	I0110 08:55:11.237015  339914 cri.go:96] found id: "90c58c3cd9924aced7da1338afb4ee8fd756be611e2c9aed303c38a538dfbacc"
	I0110 08:55:11.237018  339914 cri.go:96] found id: ""
	I0110 08:55:11.237054  339914 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:55:11.717347  339914 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:55:11.731421  339914 pause.go:52] kubelet running: false
	I0110 08:55:11.731489  339914 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I0110 08:55:11.866667  339914 cri.go:61] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I0110 08:55:11.866883  339914 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0110 08:55:11.938923  339914 cri.go:96] found id: "dafc458b0b0ed2e517d82757911d24c31a8b3d269aea3f699c10d94d701dffe2"
	I0110 08:55:11.938944  339914 cri.go:96] found id: "fe72c29c3cb566af837dc49a7fa9ff51841e0d239b1c0e7672cef1d14b2e2e1e"
	I0110 08:55:11.938949  339914 cri.go:96] found id: "7365649bf838b1d9b8c45dcaa7ce29160f5dd8674c9802b9f0610e314ca173cc"
	I0110 08:55:11.938987  339914 cri.go:96] found id: "ce0d065d2705be147f0cd136ee494369b9a709e0327cb0d06b594a233ab11c96"
	I0110 08:55:11.938999  339914 cri.go:96] found id: "04153603f19d1830f6cad025b9d59e70752a925ea51b14474fc99161af31a6c1"
	I0110 08:55:11.939008  339914 cri.go:96] found id: "90c58c3cd9924aced7da1338afb4ee8fd756be611e2c9aed303c38a538dfbacc"
	I0110 08:55:11.939013  339914 cri.go:96] found id: ""
	I0110 08:55:11.939055  339914 ssh_runner.go:195] Run: sudo runc list -f json
	I0110 08:55:11.953682  339914 out.go:203] 
	W0110 08:55:11.955040  339914 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:55:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:55:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W0110 08:55:11.955060  339914 out.go:285] * 
	* 
	W0110 08:55:11.956917  339914 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0110 08:55:11.958166  339914 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-582650 --alsologtostderr -v=1 failed: exit status 80
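
The stderr trace above shows the pause sequence: disable the kubelet, list containers in the kube-system, kubernetes-dashboard, and istio-operator namespaces via crictl (six IDs are found on each pass), then enumerate runtime state with "sudo runc list -f json", which exits 1 because /run/runc does not exist on the node; after retrying, minikube aborts with GUEST_PAUSE. A hypothetical Go reproduction of that failing probe, guarding on the state directory first:

	// Hypothetical diagnostic for the failure above: check the runc state
	// directory before calling "runc list", mirroring the command minikube ran.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Same precondition that produced "open /run/runc: no such file or
		// directory" in the trace above.
		if _, err := os.Stat("/run/runc"); os.IsNotExist(err) {
			fmt.Println("runc state dir missing; 'runc list' will fail")
			return
		}
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		fmt.Printf("err=%v\n%s", err, out)
	}
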
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-582650
helpers_test.go:244: (dbg) docker inspect newest-cni-582650:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4dbf07d4b1628a1acd5c2e257abf85d859efa39216a44d32788ff5dc3a82de51",
	        "Created": "2026-01-10T08:54:34.771145794Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 337849,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T08:55:00.268116687Z",
	            "FinishedAt": "2026-01-10T08:54:59.391516804Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/4dbf07d4b1628a1acd5c2e257abf85d859efa39216a44d32788ff5dc3a82de51/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4dbf07d4b1628a1acd5c2e257abf85d859efa39216a44d32788ff5dc3a82de51/hostname",
	        "HostsPath": "/var/lib/docker/containers/4dbf07d4b1628a1acd5c2e257abf85d859efa39216a44d32788ff5dc3a82de51/hosts",
	        "LogPath": "/var/lib/docker/containers/4dbf07d4b1628a1acd5c2e257abf85d859efa39216a44d32788ff5dc3a82de51/4dbf07d4b1628a1acd5c2e257abf85d859efa39216a44d32788ff5dc3a82de51-json.log",
	        "Name": "/newest-cni-582650",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-582650:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-582650",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4dbf07d4b1628a1acd5c2e257abf85d859efa39216a44d32788ff5dc3a82de51",
	                "LowerDir": "/var/lib/docker/overlay2/cecf9ccbd369e95c2f1fad3e86ddbefa88377e415cb790180c787df246182877-init/diff:/var/lib/docker/overlay2/97b14eb520192356c991915ce74cd6094e6ef948f398ac688b667ea18491ccde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cecf9ccbd369e95c2f1fad3e86ddbefa88377e415cb790180c787df246182877/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cecf9ccbd369e95c2f1fad3e86ddbefa88377e415cb790180c787df246182877/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cecf9ccbd369e95c2f1fad3e86ddbefa88377e415cb790180c787df246182877/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-582650",
	                "Source": "/var/lib/docker/volumes/newest-cni-582650/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-582650",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-582650",
	                "name.minikube.sigs.k8s.io": "newest-cni-582650",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b890a07c2b1d3b75264d0fa2fafaa9072018523e7719c3bc0cbd2311f467df07",
	            "SandboxKey": "/var/run/docker/netns/b890a07c2b1d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-582650": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "075f874b6857901f9e3f8b443cec464881c99fdb29213454b1860411dcc7e5ce",
	                    "EndpointID": "7e12442882becd0197f48f91aa7ee55a5f68b509c45969230c5a527c0ebbd7c0",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "7a:9f:1d:83:6d:0f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-582650",
	                        "4dbf07d4b162"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
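
In the inspect output above, every PortBindings entry requests HostPort "" on 127.0.0.1, so Docker assigns free ports that only appear under NetworkSettings.Ports (22/tcp landed on 33133, which the pause trace earlier resolved with an inspect template). A hypothetical standalone version of that lookup, reusing the same template string:

	// Hypothetical port lookup, using the inspect template that appears in
	// the minikube trace above to resolve the host port mapped to 22/tcp.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
			"newest-cni-582650").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // "33133" in this run
	}
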
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-582650 -n newest-cni-582650
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-582650 -n newest-cni-582650: exit status 2 (368.895187ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-582650 logs -n 25
E0110 08:55:12.590153    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/auto-472660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-582650 logs -n 25: (1.002018897s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-093083                                                                                                                                                                                                                     │ old-k8s-version-093083            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ image   │ no-preload-095312 image list --format=json                                                                                                                                                                                                    │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p no-preload-095312 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p old-k8s-version-093083                                                                                                                                                                                                                     │ old-k8s-version-093083            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p newest-cni-582650 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ delete  │ -p no-preload-095312                                                                                                                                                                                                                          │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ delete  │ -p no-preload-095312                                                                                                                                                                                                                          │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p test-preload-dl-gcs-424382 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-424382        │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-424382                                                                                                                                                                                                                 │ test-preload-dl-gcs-424382        │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p test-preload-dl-github-434342 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-434342     │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ image   │ embed-certs-072273 image list --format=json                                                                                                                                                                                                   │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p embed-certs-072273 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p test-preload-dl-github-434342                                                                                                                                                                                                              │ test-preload-dl-github-434342     │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-077581 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-077581 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-077581                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-077581 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ delete  │ -p embed-certs-072273                                                                                                                                                                                                                         │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ delete  │ -p embed-certs-072273                                                                                                                                                                                                                         │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable metrics-server -p newest-cni-582650 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ stop    │ -p newest-cni-582650 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable dashboard -p newest-cni-582650 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p newest-cni-582650 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:55 UTC │ 10 Jan 26 08:55 UTC │
	│ image   │ default-k8s-diff-port-225354 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-225354      │ jenkins │ v1.37.0 │ 10 Jan 26 08:55 UTC │ 10 Jan 26 08:55 UTC │
	│ pause   │ -p default-k8s-diff-port-225354 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-225354      │ jenkins │ v1.37.0 │ 10 Jan 26 08:55 UTC │                     │
	│ image   │ newest-cni-582650 image list --format=json                                                                                                                                                                                                    │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:55 UTC │ 10 Jan 26 08:55 UTC │
	│ pause   │ -p newest-cni-582650 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:55:00
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:55:00.043379  337651 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:55:00.043525  337651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:55:00.043538  337651 out.go:374] Setting ErrFile to fd 2...
	I0110 08:55:00.043544  337651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:55:00.043842  337651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:55:00.044396  337651 out.go:368] Setting JSON to false
	I0110 08:55:00.045548  337651 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2252,"bootTime":1768033048,"procs":261,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:55:00.045600  337651 start.go:143] virtualization: kvm guest
	I0110 08:55:00.047536  337651 out.go:179] * [newest-cni-582650] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 08:55:00.049141  337651 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:55:00.049136  337651 notify.go:221] Checking for updates...
	I0110 08:55:00.051578  337651 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:55:00.052772  337651 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:55:00.054143  337651 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:55:00.055504  337651 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 08:55:00.056874  337651 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:55:00.058529  337651 config.go:182] Loaded profile config "newest-cni-582650": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:55:00.059052  337651 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:55:00.083180  337651 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:55:00.083261  337651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:55:00.139318  337651 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2026-01-10 08:55:00.129647485 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:55:00.139469  337651 docker.go:319] overlay module found
	I0110 08:55:00.141276  337651 out.go:179] * Using the docker driver based on existing profile
	I0110 08:55:00.142458  337651 start.go:309] selected driver: docker
	I0110 08:55:00.142480  337651 start.go:928] validating driver "docker" against &{Name:newest-cni-582650 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-582650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:55:00.142582  337651 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:55:00.143267  337651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:55:00.197877  337651 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2026-01-10 08:55:00.188806511 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:55:00.198241  337651 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 08:55:00.198276  337651 cni.go:84] Creating CNI manager for ""
	I0110 08:55:00.198348  337651 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:55:00.198405  337651 start.go:353] cluster config:
	{Name:newest-cni-582650 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-582650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:55:00.200031  337651 out.go:179] * Starting "newest-cni-582650" primary control-plane node in "newest-cni-582650" cluster
	I0110 08:55:00.201239  337651 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 08:55:00.202384  337651 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:55:00.203414  337651 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:55:00.203449  337651 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 08:55:00.203455  337651 cache.go:65] Caching tarball of preloaded images
	I0110 08:55:00.203502  337651 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:55:00.203549  337651 preload.go:251] Found /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 08:55:00.203565  337651 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 08:55:00.203687  337651 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/config.json ...
	I0110 08:55:00.223996  337651 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 08:55:00.224013  337651 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 08:55:00.224029  337651 cache.go:243] Successfully downloaded all kic artifacts
	I0110 08:55:00.224057  337651 start.go:360] acquireMachinesLock for newest-cni-582650: {Name:mk8a366cb6a19cf5fbfd56cf9cfee17123f828e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:55:00.224121  337651 start.go:364] duration metric: took 36.014µs to acquireMachinesLock for "newest-cni-582650"
	I0110 08:55:00.224137  337651 start.go:96] Skipping create...Using existing machine configuration
	I0110 08:55:00.224141  337651 fix.go:54] fixHost starting: 
	I0110 08:55:00.224354  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:00.241369  337651 fix.go:112] recreateIfNeeded on newest-cni-582650: state=Stopped err=<nil>
	W0110 08:55:00.241406  337651 fix.go:138] unexpected machine state, will restart: <nil>
	I0110 08:55:00.243298  337651 out.go:252] * Restarting existing docker container for "newest-cni-582650" ...
	I0110 08:55:00.243356  337651 cli_runner.go:164] Run: docker start newest-cni-582650
	I0110 08:55:00.486349  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:00.505501  337651 kic.go:430] container "newest-cni-582650" state is running.
	I0110 08:55:00.505877  337651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-582650
	I0110 08:55:00.524765  337651 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/config.json ...
	I0110 08:55:00.525042  337651 machine.go:94] provisionDockerMachine start ...
	I0110 08:55:00.525107  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:00.544567  337651 main.go:144] libmachine: Using SSH client type: native
	I0110 08:55:00.544832  337651 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0110 08:55:00.544847  337651 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 08:55:00.545519  337651 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40212->127.0.0.1:33133: read: connection reset by peer
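
The handshake failure here is transient: `docker start` returns before the guest's sshd is listening, so the first dial is reset by the peer and the provisioner retries until the hostname command succeeds three seconds later. A minimal Go sketch of that retry pattern, using a plain TCP dial in place of a real SSH handshake (the address and timings are illustrative assumptions, not minikube's implementation):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // dialWithRetry re-dials addr until a connection succeeds or the deadline
    // passes; resets while the guest's sshd is still starting are retried.
    func dialWithRetry(addr string, timeout time.Duration) (net.Conn, error) {
        deadline := time.Now().Add(timeout)
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                return conn, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("giving up on %s: %w", addr, err)
            }
            time.Sleep(500 * time.Millisecond) // modest fixed backoff between attempts
        }
    }

    func main() {
        conn, err := dialWithRetry("127.0.0.1:33133", 30*time.Second)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer conn.Close()
        fmt.Println("connected:", conn.RemoteAddr())
    }
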
	I0110 08:55:03.674623  337651 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-582650
	
	I0110 08:55:03.674651  337651 ubuntu.go:182] provisioning hostname "newest-cni-582650"
	I0110 08:55:03.674704  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:03.692657  337651 main.go:144] libmachine: Using SSH client type: native
	I0110 08:55:03.692890  337651 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0110 08:55:03.692907  337651 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-582650 && echo "newest-cni-582650" | sudo tee /etc/hostname
	I0110 08:55:03.828409  337651 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-582650
	
	I0110 08:55:03.828473  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:03.846317  337651 main.go:144] libmachine: Using SSH client type: native
	I0110 08:55:03.846526  337651 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0110 08:55:03.846543  337651 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-582650' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-582650/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-582650' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 08:55:03.973261  337651 main.go:144] libmachine: SSH cmd err, output: <nil>: 
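
The shell fragment above is an idempotent /etc/hosts edit: if no line already names newest-cni-582650, an existing 127.0.1.1 entry is rewritten in place, otherwise a new one is appended. The same logic as a self-contained Go sketch (the path, hostname, and the ensureHostsEntry helper are illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry mirrors the shell above: do nothing if some line already
    // maps host, else rewrite the 127.0.1.1 line in place, else append one.
    func ensureHostsEntry(path, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        lines := strings.Split(string(data), "\n")
        replaced := false
        for i, l := range lines {
            fields := strings.Fields(l)
            if len(fields) > 1 {
                for _, name := range fields[1:] {
                    if name == host {
                        return nil // entry already present, nothing to do
                    }
                }
            }
            if !replaced && len(fields) > 0 && fields[0] == "127.0.1.1" {
                lines[i] = "127.0.1.1 " + host
                replaced = true
            }
        }
        if !replaced {
            lines = append(lines, "127.0.1.1 "+host)
        }
        return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "newest-cni-582650"); err != nil {
            fmt.Println(err)
        }
    }
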
	I0110 08:55:03.973293  337651 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-3641/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-3641/.minikube}
	I0110 08:55:03.973332  337651 ubuntu.go:190] setting up certificates
	I0110 08:55:03.973353  337651 provision.go:84] configureAuth start
	I0110 08:55:03.973412  337651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-582650
	I0110 08:55:03.991962  337651 provision.go:143] copyHostCerts
	I0110 08:55:03.992035  337651 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem, removing ...
	I0110 08:55:03.992063  337651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem
	I0110 08:55:03.992169  337651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem (1078 bytes)
	I0110 08:55:03.992344  337651 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem, removing ...
	I0110 08:55:03.992367  337651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem
	I0110 08:55:03.992428  337651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem (1123 bytes)
	I0110 08:55:03.992533  337651 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem, removing ...
	I0110 08:55:03.992544  337651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem
	I0110 08:55:03.992585  337651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem (1675 bytes)
	I0110 08:55:03.992659  337651 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem org=jenkins.newest-cni-582650 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-582650]
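
configureAuth signs a server certificate whose subject alternative names cover every address the machine answers on (127.0.0.1, 192.168.94.2, localhost, minikube, newest-cni-582650), so TLS verifies over both the forwarded local port and the container IP. A sketch of issuing such a cert with Go's crypto/x509; the SANs, org, and 26280h lifetime come from the log above, while the RSA CA and helper name are illustrative assumptions:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    // newServerCert signs a server certificate whose SANs cover the IPs and
    // DNS names listed in the provision step above.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-582650"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-582650"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
        return der, key, err
    }

    func main() {
        // Self-signed throwaway CA, standing in for minikubeCA; errors elided
        // for brevity in this sketch.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        ca, _ := x509.ParseCertificate(caDER)
        der, _, err := newServerCert(ca, caKey)
        fmt.Println(len(der), err)
    }
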
	I0110 08:55:04.081124  337651 provision.go:177] copyRemoteCerts
	I0110 08:55:04.081206  337651 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 08:55:04.081249  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.100529  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:04.194315  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 08:55:04.211927  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 08:55:04.229325  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0110 08:55:04.246098  337651 provision.go:87] duration metric: took 272.723804ms to configureAuth
	I0110 08:55:04.246123  337651 ubuntu.go:206] setting minikube options for container-runtime
	I0110 08:55:04.246301  337651 config.go:182] Loaded profile config "newest-cni-582650": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:55:04.246422  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.265307  337651 main.go:144] libmachine: Using SSH client type: native
	I0110 08:55:04.265532  337651 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0110 08:55:04.265554  337651 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 08:55:04.543910  337651 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0110 08:55:04.543936  337651 machine.go:97] duration metric: took 4.018877882s to provisionDockerMachine
	I0110 08:55:04.543951  337651 start.go:293] postStartSetup for "newest-cni-582650" (driver="docker")
	I0110 08:55:04.543965  337651 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 08:55:04.544023  337651 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 08:55:04.544069  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.562427  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:04.656029  337651 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 08:55:04.659421  337651 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 08:55:04.659453  337651 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 08:55:04.659466  337651 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-3641/.minikube/addons for local assets ...
	I0110 08:55:04.659517  337651 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-3641/.minikube/files for local assets ...
	I0110 08:55:04.659609  337651 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem -> 71832.pem in /etc/ssl/certs
	I0110 08:55:04.659755  337651 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 08:55:04.668433  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem --> /etc/ssl/certs/71832.pem (1708 bytes)
	I0110 08:55:04.685867  337651 start.go:296] duration metric: took 141.902418ms for postStartSetup
	I0110 08:55:04.685949  337651 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:55:04.686014  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.704239  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:04.794956  337651 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 08:55:04.799376  337651 fix.go:56] duration metric: took 4.575228964s for fixHost
	I0110 08:55:04.799403  337651 start.go:83] releasing machines lock for "newest-cni-582650", held for 4.575271886s
	I0110 08:55:04.799453  337651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-582650
	I0110 08:55:04.817146  337651 ssh_runner.go:195] Run: cat /version.json
	I0110 08:55:04.817199  337651 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 08:55:04.817280  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.817203  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.836895  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:04.837570  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:04.980302  337651 ssh_runner.go:195] Run: systemctl --version
	I0110 08:55:04.986927  337651 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 08:55:05.021964  337651 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 08:55:05.026769  337651 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 08:55:05.026837  337651 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 08:55:05.035076  337651 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
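
Before handing pod networking to kindnet, any pre-existing bridge or podman CNI configs are moved out of /etc/cni/net.d by renaming them with a .mk_disabled suffix, which is exactly what the find/-exec mv pipeline above does; on this run there was nothing to disable. A Go sketch of the same rename pass (directory and suffix taken from the log, the helper itself is illustrative):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // disableBridgeCNI renames bridge/podman CNI configs out of the way, the
    // same effect as the find/mv pipeline above; already-disabled files skip.
    func disableBridgeCNI(dir string) ([]string, error) {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return nil, err
        }
        var moved []string
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                src := filepath.Join(dir, name)
                if err := os.Rename(src, src+".mk_disabled"); err != nil {
                    return moved, err
                }
                moved = append(moved, src)
            }
        }
        return moved, nil
    }

    func main() {
        moved, err := disableBridgeCNI("/etc/cni/net.d")
        fmt.Println(moved, err) // empty slice here means nothing to disable
    }
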
	I0110 08:55:05.035124  337651 start.go:496] detecting cgroup driver to use...
	I0110 08:55:05.035171  337651 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 08:55:05.035219  337651 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 08:55:05.049316  337651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 08:55:05.061222  337651 docker.go:218] disabling cri-docker service (if available) ...
	I0110 08:55:05.061266  337651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 08:55:05.076828  337651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 08:55:05.088925  337651 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 08:55:05.169201  337651 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 08:55:05.250358  337651 docker.go:234] disabling docker service ...
	I0110 08:55:05.250421  337651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 08:55:05.265340  337651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 08:55:05.277642  337651 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 08:55:05.354970  337651 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 08:55:05.438086  337651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 08:55:05.450523  337651 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 08:55:05.464552  337651 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 08:55:05.464606  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.473501  337651 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 08:55:05.473560  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.482110  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.490292  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.498788  337651 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 08:55:05.507142  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.515949  337651 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.524862  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.533635  337651 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 08:55:05.541045  337651 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 08:55:05.548719  337651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:55:05.628011  337651 ssh_runner.go:195] Run: sudo systemctl restart crio
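
Each sed above is a whole-line rewrite of /etc/crio/crio.conf.d/02-crio.conf: pin the pause image, switch cgroup_manager to systemd, reset conmon_cgroup, and open unprivileged ports, followed by a daemon-reload and crio restart. A sketch of one such line-oriented rewrite in Go (file path, patterns, and values copied from the log; the setConfLine helper is an illustrative stand-in for sed -i):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setConfLine replaces every line matching pattern with repl, mirroring
    // `sed -i 's|^.*pause_image = .*$|pause_image = "..."|'` above.
    func setConfLine(path, pattern, repl string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile("(?m)" + pattern) // (?m): ^ and $ match per line
        return os.WriteFile(path, re.ReplaceAll(data, []byte(repl)), 0644)
    }

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"
        err := setConfLine(conf, `^.*pause_image = .*$`, `pause_image = "registry.k8s.io/pause:3.10.1"`)
        if err == nil {
            err = setConfLine(conf, `^.*cgroup_manager = .*$`, `cgroup_manager = "systemd"`)
        }
        fmt.Println(err)
    }
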
	I0110 08:55:05.763111  337651 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 08:55:05.763196  337651 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 08:55:05.767248  337651 start.go:574] Will wait 60s for crictl version
	I0110 08:55:05.767300  337651 ssh_runner.go:195] Run: which crictl
	I0110 08:55:05.770834  337651 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 08:55:05.795545  337651 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 08:55:05.795612  337651 ssh_runner.go:195] Run: crio --version
	I0110 08:55:05.822934  337651 ssh_runner.go:195] Run: crio --version
	I0110 08:55:05.854094  337651 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 08:55:05.855440  337651 cli_runner.go:164] Run: docker network inspect newest-cni-582650 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:55:05.874881  337651 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0110 08:55:05.878985  337651 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 08:55:05.890627  337651 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0110 08:55:05.891718  337651 kubeadm.go:884] updating cluster {Name:newest-cni-582650 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-582650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 08:55:05.891861  337651 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:55:05.891935  337651 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 08:55:05.926755  337651 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 08:55:05.926777  337651 crio.go:433] Images already preloaded, skipping extraction
	I0110 08:55:05.926824  337651 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 08:55:05.953234  337651 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 08:55:05.953260  337651 cache_images.go:86] Images are preloaded, skipping loading
	I0110 08:55:05.953268  337651 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I0110 08:55:05.953454  337651 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-582650 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-582650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0110 08:55:05.953555  337651 ssh_runner.go:195] Run: crio config
	I0110 08:55:05.999327  337651 cni.go:84] Creating CNI manager for ""
	I0110 08:55:05.999360  337651 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:55:05.999383  337651 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I0110 08:55:05.999417  337651 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-582650 NodeName:newest-cni-582650 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 08:55:05.999536  337651 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-582650"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
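
The generated kubeadm config above is a multi-document YAML stream: InitConfiguration and ClusterConfiguration for kubeadm itself, then KubeletConfiguration and KubeProxyConfiguration, separated by --- markers. A small Go sketch that splits such a stream and reports each document's kind (stdlib-only and purely illustrative; real consumers would use a YAML decoder):

    package main

    import (
        "fmt"
        "strings"
    )

    // splitDocKinds breaks a multi-document YAML stream on "---" separators
    // and returns the "kind:" value of each document, in order.
    func splitDocKinds(config string) []string {
        var kinds []string
        for _, doc := range strings.Split(config, "\n---\n") {
            for _, line := range strings.Split(doc, "\n") {
                if strings.HasPrefix(line, "kind: ") {
                    kinds = append(kinds, strings.TrimPrefix(line, "kind: "))
                }
            }
        }
        return kinds
    }

    func main() {
        config := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n" +
            "---\napiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n"
        fmt.Println(splitDocKinds(config)) // [InitConfiguration ClusterConfiguration]
    }
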
	
	I0110 08:55:05.999603  337651 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 08:55:06.008278  337651 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 08:55:06.008353  337651 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 08:55:06.015782  337651 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 08:55:06.028209  337651 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 08:55:06.040652  337651 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I0110 08:55:06.053361  337651 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0110 08:55:06.057091  337651 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 08:55:06.067273  337651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:55:06.148919  337651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:55:06.175368  337651 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650 for IP: 192.168.94.2
	I0110 08:55:06.175391  337651 certs.go:195] generating shared ca certs ...
	I0110 08:55:06.175411  337651 certs.go:227] acquiring lock for ca certs: {Name:mk00e261408d0e9fd9be39128613c5110a764de2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:55:06.175572  337651 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.key
	I0110 08:55:06.175708  337651 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.key
	I0110 08:55:06.175769  337651 certs.go:257] generating profile certs ...
	I0110 08:55:06.175934  337651 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/client.key
	I0110 08:55:06.176008  337651 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/apiserver.key.0aa7c905
	I0110 08:55:06.176063  337651 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/proxy-client.key
	I0110 08:55:06.176203  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183.pem (1338 bytes)
	W0110 08:55:06.176248  337651 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183_empty.pem, impossibly tiny 0 bytes
	I0110 08:55:06.176263  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem (1675 bytes)
	I0110 08:55:06.176306  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem (1078 bytes)
	I0110 08:55:06.176343  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem (1123 bytes)
	I0110 08:55:06.176377  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem (1675 bytes)
	I0110 08:55:06.176437  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem (1708 bytes)
	I0110 08:55:06.177184  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 08:55:06.196870  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 08:55:06.215933  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 08:55:06.235476  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 08:55:06.258185  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 08:55:06.277751  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 08:55:06.295268  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 08:55:06.312421  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 08:55:06.329617  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem --> /usr/share/ca-certificates/71832.pem (1708 bytes)
	I0110 08:55:06.346649  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 08:55:06.364016  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183.pem --> /usr/share/ca-certificates/7183.pem (1338 bytes)
	I0110 08:55:06.382003  337651 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 08:55:06.394320  337651 ssh_runner.go:195] Run: openssl version
	I0110 08:55:06.400371  337651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/71832.pem
	I0110 08:55:06.407685  337651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/71832.pem /etc/ssl/certs/71832.pem
	I0110 08:55:06.415138  337651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71832.pem
	I0110 08:55:06.419188  337651 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 08:23 /usr/share/ca-certificates/71832.pem
	I0110 08:55:06.419234  337651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71832.pem
	I0110 08:55:06.454164  337651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 08:55:06.461860  337651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:55:06.470568  337651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 08:55:06.478089  337651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:55:06.481724  337651 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:55:06.481786  337651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:55:06.515894  337651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 08:55:06.523865  337651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7183.pem
	I0110 08:55:06.531389  337651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7183.pem /etc/ssl/certs/7183.pem
	I0110 08:55:06.538646  337651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7183.pem
	I0110 08:55:06.542199  337651 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 08:23 /usr/share/ca-certificates/7183.pem
	I0110 08:55:06.542240  337651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7183.pem
	I0110 08:55:06.577649  337651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
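
Each certificate placed under /usr/share/ca-certificates is then exposed through OpenSSL's hashed lookup scheme: the subject hash is computed with `openssl x509 -hash -noout` and the file is linked as /etc/ssl/certs/<hash>.0, which is what the ln -fs and test -L pairs above establish and verify. A Go sketch that shells out to the same openssl invocation and creates the link (paths are illustrative; the os.Remove gives ln -fs overwrite semantics):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCertByHash computes the OpenSSL subject hash of a PEM certificate
    // and symlinks it as /etc/ssl/certs/<hash>.0, replacing any stale link.
    func linkCertByHash(pem string) (string, error) {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            return "", err
        }
        link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
        os.Remove(link) // ln -fs semantics: overwrite an existing link
        return link, os.Symlink(pem, link)
    }

    func main() {
        link, err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem")
        fmt.Println(link, err)
    }
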
	I0110 08:55:06.585536  337651 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 08:55:06.589317  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 08:55:06.625993  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 08:55:06.660607  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 08:55:06.701294  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 08:55:06.750337  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 08:55:06.795920  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0110 08:55:06.844782  337651 kubeadm.go:401] StartCluster: {Name:newest-cni-582650 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-582650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:55:06.844904  337651 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:55:06.844978  337651 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:55:06.876617  337651 cri.go:96] found id: "7365649bf838b1d9b8c45dcaa7ce29160f5dd8674c9802b9f0610e314ca173cc"
	I0110 08:55:06.876642  337651 cri.go:96] found id: "ce0d065d2705be147f0cd136ee494369b9a709e0327cb0d06b594a233ab11c96"
	I0110 08:55:06.876646  337651 cri.go:96] found id: "04153603f19d1830f6cad025b9d59e70752a925ea51b14474fc99161af31a6c1"
	I0110 08:55:06.876650  337651 cri.go:96] found id: "90c58c3cd9924aced7da1338afb4ee8fd756be611e2c9aed303c38a538dfbacc"
	I0110 08:55:06.876653  337651 cri.go:96] found id: ""
	I0110 08:55:06.876706  337651 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 08:55:06.889419  337651 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:55:06Z" level=error msg="open /run/runc: no such file or directory"
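
The runc list probe fails here because /run/runc does not exist yet on the freshly restarted container, so there is nothing to unpause and startup continues past the warning. A sketch of treating that specific failure as an empty container list rather than a fatal error (illustrative handling only, not minikube's actual code path; run with sufficient privileges for runc):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listRuncContainers runs `runc list -f json`; when the runtime root is
    // absent (as in the warning above) it returns an empty list so callers
    // can proceed instead of aborting the restart.
    func listRuncContainers() (string, error) {
        out, err := exec.Command("runc", "list", "-f", "json").CombinedOutput()
        if err != nil {
            if strings.Contains(string(out), "no such file or directory") {
                return "[]", nil // runtime root absent: nothing is paused
            }
            return "", fmt.Errorf("runc list: %w: %s", err, out)
        }
        return string(out), nil
    }

    func main() {
        list, err := listRuncContainers()
        fmt.Println(list, err)
    }
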
	I0110 08:55:06.889478  337651 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 08:55:06.897471  337651 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 08:55:06.897491  337651 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 08:55:06.897550  337651 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 08:55:06.905056  337651 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 08:55:06.905848  337651 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-582650" does not appear in /home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:55:06.906229  337651 kubeconfig.go:62] /home/jenkins/minikube-integration/22427-3641/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-582650" cluster setting kubeconfig missing "newest-cni-582650" context setting]
	I0110 08:55:06.906722  337651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/kubeconfig: {Name:mk0d29b3b0ee1fd71729aff31f901be50a2d0664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:55:06.907996  337651 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 08:55:06.916223  337651 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I0110 08:55:06.916252  337651 kubeadm.go:602] duration metric: took 18.746858ms to restartPrimaryControlPlane
	I0110 08:55:06.916267  337651 kubeadm.go:403] duration metric: took 71.493899ms to StartCluster
	I0110 08:55:06.916288  337651 settings.go:142] acquiring lock: {Name:mkbb32fc6441ceb31ce2923ea8999f8375298f08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:55:06.916352  337651 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:55:06.917032  337651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/kubeconfig: {Name:mk0d29b3b0ee1fd71729aff31f901be50a2d0664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:55:06.917252  337651 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 08:55:06.917332  337651 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 08:55:06.917423  337651 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-582650"
	I0110 08:55:06.917441  337651 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-582650"
	W0110 08:55:06.917448  337651 addons.go:248] addon storage-provisioner should already be in state true
	I0110 08:55:06.917456  337651 addons.go:70] Setting dashboard=true in profile "newest-cni-582650"
	I0110 08:55:06.917486  337651 addons.go:239] Setting addon dashboard=true in "newest-cni-582650"
	W0110 08:55:06.917500  337651 addons.go:248] addon dashboard should already be in state true
	I0110 08:55:06.917498  337651 config.go:182] Loaded profile config "newest-cni-582650": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:55:06.917505  337651 addons.go:70] Setting default-storageclass=true in profile "newest-cni-582650"
	I0110 08:55:06.917531  337651 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-582650"
	I0110 08:55:06.917545  337651 host.go:66] Checking if "newest-cni-582650" exists ...
	I0110 08:55:06.917487  337651 host.go:66] Checking if "newest-cni-582650" exists ...
	I0110 08:55:06.917888  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:06.918065  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:06.918090  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:06.922752  337651 out.go:179] * Verifying Kubernetes components...
	I0110 08:55:06.924557  337651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:55:06.944895  337651 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 08:55:06.944980  337651 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 08:55:06.946125  337651 addons.go:239] Setting addon default-storageclass=true in "newest-cni-582650"
	W0110 08:55:06.946159  337651 addons.go:248] addon default-storageclass should already be in state true
	I0110 08:55:06.946192  337651 host.go:66] Checking if "newest-cni-582650" exists ...
	I0110 08:55:06.946653  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:06.946876  337651 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 08:55:06.946895  337651 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 08:55:06.946956  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:06.947943  337651 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 08:55:06.949187  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 08:55:06.949212  337651 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 08:55:06.949272  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:06.979786  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:06.982713  337651 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 08:55:06.982757  337651 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 08:55:06.982820  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:06.986338  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:07.009531  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:07.065163  337651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:55:07.081832  337651 api_server.go:52] waiting for apiserver process to appear ...
	I0110 08:55:07.081898  337651 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 08:55:07.095536  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 08:55:07.095562  337651 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 08:55:07.096364  337651 api_server.go:72] duration metric: took 179.085582ms to wait for apiserver process to appear ...
	I0110 08:55:07.096384  337651 api_server.go:88] waiting for apiserver healthz status ...
	I0110 08:55:07.096403  337651 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0110 08:55:07.100030  337651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 08:55:07.111493  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 08:55:07.111519  337651 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 08:55:07.122472  337651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 08:55:07.128466  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 08:55:07.128484  337651 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 08:55:07.144597  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 08:55:07.144620  337651 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 08:55:07.160177  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 08:55:07.160236  337651 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 08:55:07.177064  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 08:55:07.177088  337651 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 08:55:07.192696  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 08:55:07.192723  337651 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 08:55:07.207042  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 08:55:07.207063  337651 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 08:55:07.219547  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 08:55:07.219572  337651 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 08:55:07.232446  337651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 08:55:08.397883  337651 api_server.go:325] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0110 08:55:08.397912  337651 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0110 08:55:08.397934  337651 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0110 08:55:08.408043  337651 api_server.go:325] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0110 08:55:08.408134  337651 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0110 08:55:08.597191  337651 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0110 08:55:08.602170  337651 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 08:55:08.602223  337651 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 08:55:08.914012  337651 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.813953016s)
	I0110 08:55:08.914115  337651 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.791608298s)
	I0110 08:55:08.914183  337651 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.681703498s)
	I0110 08:55:08.916001  337651 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-582650 addons enable metrics-server
	
	I0110 08:55:08.924789  337651 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 08:55:08.926065  337651 addons.go:530] duration metric: took 2.008739629s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I0110 08:55:09.096869  337651 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0110 08:55:09.101576  337651 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 08:55:09.101606  337651 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 08:55:09.597108  337651 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0110 08:55:09.606628  337651 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0110 08:55:09.608612  337651 api_server.go:141] control plane version: v1.35.0
	I0110 08:55:09.608678  337651 api_server.go:131] duration metric: took 2.512285395s to wait for apiserver health ...
	I0110 08:55:09.608701  337651 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 08:55:09.612542  337651 system_pods.go:59] 8 kube-system pods found
	I0110 08:55:09.612572  337651 system_pods.go:61] "coredns-7d764666f9-bmscc" [bc0ad55b-bbf6-4898-a38a-7a1a2d154cb3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 08:55:09.612580  337651 system_pods.go:61] "etcd-newest-cni-582650" [bb439312-4d17-46e1-9d07-4b972ad2299b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 08:55:09.612587  337651 system_pods.go:61] "kindnet-gp4sj" [c1167720-98b8-4850-a264-11964eb2675d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 08:55:09.612599  337651 system_pods.go:61] "kube-apiserver-newest-cni-582650" [947302b1-615d-4f31-976c-039fcf37be97] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 08:55:09.612607  337651 system_pods.go:61] "kube-controller-manager-newest-cni-582650" [c2156827-ae41-4c25-958a-ea329f7adf65] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 08:55:09.612614  337651 system_pods.go:61] "kube-proxy-ldmfv" [02b5ffbb-b52f-4339-bee2-b9400a4714bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 08:55:09.612621  337651 system_pods.go:61] "kube-scheduler-newest-cni-582650" [8d788728-c388-42a6-9bcd-9ab2bf3468fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 08:55:09.612634  337651 system_pods.go:61] "storage-provisioner" [349ec60d-a776-479e-b9a0-892989e886eb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 08:55:09.612643  337651 system_pods.go:74] duration metric: took 3.926901ms to wait for pod list to return data ...
	I0110 08:55:09.612653  337651 default_sa.go:34] waiting for default service account to be created ...
	I0110 08:55:09.615183  337651 default_sa.go:45] found service account: "default"
	I0110 08:55:09.615208  337651 default_sa.go:55] duration metric: took 2.548851ms for default service account to be created ...
	I0110 08:55:09.615222  337651 kubeadm.go:587] duration metric: took 2.697945894s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 08:55:09.615245  337651 node_conditions.go:102] verifying NodePressure condition ...
	I0110 08:55:09.617802  337651 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 08:55:09.617836  337651 node_conditions.go:123] node cpu capacity is 8
	I0110 08:55:09.617855  337651 node_conditions.go:105] duration metric: took 2.604361ms to run NodePressure ...
	I0110 08:55:09.617875  337651 start.go:242] waiting for startup goroutines ...
	I0110 08:55:09.617884  337651 start.go:247] waiting for cluster config update ...
	I0110 08:55:09.617898  337651 start.go:256] writing updated cluster config ...
	I0110 08:55:09.618148  337651 ssh_runner.go:195] Run: rm -f paused
	I0110 08:55:09.667016  337651 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 08:55:09.669727  337651 out.go:179] * Done! kubectl is now configured to use "newest-cni-582650" cluster and "default" namespace by default
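
The 403 -> 500 -> 200 progression in the healthz checks above is the normal apiserver restart sequence: the anonymous probe is rejected outright at first, then /healthz answers but reports the still-pending rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks as failed (reasons withheld from unauthenticated callers), and finally returns a plain "ok". A minimal Go sketch of this style of polling loop, illustrative only and not minikube's actual api_server.go; the URL and the skipped certificate verification are assumptions taken from the log:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls an apiserver /healthz endpoint until it returns 200
	// or the deadline passes. TLS verification is skipped because the probe is
	// anonymous, matching the 403/500 responses recorded above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   2 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // the apiserver finally reported "ok"
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(200 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		// Endpoint taken from the log above (192.168.94.2:8443).
		if err := waitForHealthz("https://192.168.94.2:8443/healthz", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}
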
	
	
	==> CRI-O <==
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.552415739Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.555460668Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c1e728ab-f65c-41a0-9650-795e33d34d25 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.556203109Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6f2acd88-9e09-4739-a3fb-23bbeaaff32b name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.557226411Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.55790479Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.55817598Z" level=info msg="Ran pod sandbox 4facb9f39df6cbf1ac474fa6b2617a3f5e124e7806d5d95dd2162682a8865d5f with infra container: kube-system/kindnet-gp4sj/POD" id=c1e728ab-f65c-41a0-9650-795e33d34d25 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.558717185Z" level=info msg="Ran pod sandbox ae8d846f012dd405959c0496b349ab8b5912462410f06e662eb5e6370d977a06 with infra container: kube-system/kube-proxy-ldmfv/POD" id=6f2acd88-9e09-4739-a3fb-23bbeaaff32b name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.55935734Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=d5bee083-834c-44bb-9afb-424b4283f781 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.560522772Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=707024a3-eefd-4222-94c0-d7194a475dd9 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.560574907Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=9502548d-e6e3-4aca-ade9-90ccce39506f name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.561785578Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=2cc6caf3-a7f6-4a69-ad3c-b6dcc185e30a name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.562071304Z" level=info msg="Creating container: kube-system/kindnet-gp4sj/kindnet-cni" id=6edb84b5-f143-4222-9ff0-c0d390f090ec name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.562176185Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.562643784Z" level=info msg="Creating container: kube-system/kube-proxy-ldmfv/kube-proxy" id=6a98df64-51cf-4416-8cb8-d41306615ea0 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.562837401Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.567005645Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.567613768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.56982781Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.570496388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.610553439Z" level=info msg="Created container fe72c29c3cb566af837dc49a7fa9ff51841e0d239b1c0e7672cef1d14b2e2e1e: kube-system/kindnet-gp4sj/kindnet-cni" id=6edb84b5-f143-4222-9ff0-c0d390f090ec name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.611485729Z" level=info msg="Starting container: fe72c29c3cb566af837dc49a7fa9ff51841e0d239b1c0e7672cef1d14b2e2e1e" id=3fa7cc34-d3aa-4f55-9fb4-00b1c8754a4a name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.613635156Z" level=info msg="Started container" PID=1052 containerID=fe72c29c3cb566af837dc49a7fa9ff51841e0d239b1c0e7672cef1d14b2e2e1e description=kube-system/kindnet-gp4sj/kindnet-cni id=3fa7cc34-d3aa-4f55-9fb4-00b1c8754a4a name=/runtime.v1.RuntimeService/StartContainer sandboxID=4facb9f39df6cbf1ac474fa6b2617a3f5e124e7806d5d95dd2162682a8865d5f
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.613792641Z" level=info msg="Created container dafc458b0b0ed2e517d82757911d24c31a8b3d269aea3f699c10d94d701dffe2: kube-system/kube-proxy-ldmfv/kube-proxy" id=6a98df64-51cf-4416-8cb8-d41306615ea0 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.614448142Z" level=info msg="Starting container: dafc458b0b0ed2e517d82757911d24c31a8b3d269aea3f699c10d94d701dffe2" id=b5a071e6-a80b-40d6-8777-1ad8844b00cf name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.617356025Z" level=info msg="Started container" PID=1053 containerID=dafc458b0b0ed2e517d82757911d24c31a8b3d269aea3f699c10d94d701dffe2 description=kube-system/kube-proxy-ldmfv/kube-proxy id=b5a071e6-a80b-40d6-8777-1ad8844b00cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=ae8d846f012dd405959c0496b349ab8b5912462410f06e662eb5e6370d977a06
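
The stanza above is the full CRI-O cold-start path for each pod, interleaved for kindnet-gp4sj and kube-proxy-ldmfv: RunPodSandbox, ImageStatus for the target image, CreateContainer, then StartContainer. When verifying this state by hand, crictl can enumerate the resulting containers; a small sketch that shells out to it (an assumption-laden illustration: it requires crictl on PATH and sudo access to the CRI-O socket):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listPodContainers lists CRI container IDs in a namespace via crictl.
	// Assumption: crictl is on PATH and sudo can reach the CRI-O socket.
	func listPodContainers(namespace string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace="+namespace).Output()
		if err != nil {
			return nil, fmt.Errorf("crictl ps: %w", err)
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := listPodContainers("kube-system")
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, id := range ids {
			fmt.Println(id)
		}
	}

Filtering on the io.kubernetes.pod.namespace label keeps the output to the kube-system containers shown in the status table that follows.
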
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	dafc458b0b0ed       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8   3 seconds ago       Running             kube-proxy                1                   ae8d846f012dd       kube-proxy-ldmfv                            kube-system
	fe72c29c3cb56       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   3 seconds ago       Running             kindnet-cni               1                   4facb9f39df6c       kindnet-gp4sj                               kube-system
	7365649bf838b       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc   6 seconds ago       Running             kube-scheduler            1                   4e5e08379238e       kube-scheduler-newest-cni-582650            kube-system
	ce0d065d2705b       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508   6 seconds ago       Running             kube-controller-manager   1                   866357ecdff7e       kube-controller-manager-newest-cni-582650   kube-system
	04153603f19d1       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499   6 seconds ago       Running             kube-apiserver            1                   8a540ff2b8b09       kube-apiserver-newest-cni-582650            kube-system
	90c58c3cd9924       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   6 seconds ago       Running             etcd                      1                   6cfd045b0894b       etcd-newest-cni-582650                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-582650
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-582650
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=newest-cni-582650
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T08_54_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 08:54:46 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-582650
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 08:55:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 08:55:08 +0000   Sat, 10 Jan 2026 08:54:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 08:55:08 +0000   Sat, 10 Jan 2026 08:54:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 08:55:08 +0000   Sat, 10 Jan 2026 08:54:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 10 Jan 2026 08:55:08 +0000   Sat, 10 Jan 2026 08:54:46 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-582650
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                31447831-8276-4e9c-bb29-38ef2ce553ce
	  Boot ID:                    7c12f492-59b6-440b-986c-741ece916a23
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-582650                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         24s
	  kube-system                 kindnet-gp4sj                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19s
	  kube-system                 kube-apiserver-newest-cni-582650             250m (3%)     0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-controller-manager-newest-cni-582650    200m (2%)     0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-proxy-ldmfv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 kube-scheduler-newest-cni-582650             100m (1%)     0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  20s   node-controller  Node newest-cni-582650 event: Registered Node newest-cni-582650 in Controller
	  Normal  RegisteredNode  2s    node-controller  Node newest-cni-582650 event: Registered Node newest-cni-582650 in Controller
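
The node description ties the earlier Pending pods together: the node.kubernetes.io/not-ready:NoSchedule taint stays in place while the Ready condition is False, and Ready is False because no CNI configuration had been written to /etc/cni/net.d yet (kindnet had only just restarted). A short client-go sketch for inspecting exactly this taint and condition state, assuming a reachable kubeconfig at the default location and using the node name from the log:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: a kubeconfig for this cluster at the default ~/.kube/config.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Node name taken from the describe output above.
		node, err := clientset.CoreV1().Nodes().Get(context.Background(),
			"newest-cni-582650", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, t := range node.Spec.Taints {
			fmt.Printf("taint: %s:%s\n", t.Key, t.Effect)
		}
		for _, c := range node.Status.Conditions {
			fmt.Printf("condition: %s=%s (%s)\n", c.Type, c.Status, c.Reason)
		}
	}
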
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 4a 03 46 88 66 4e 08 06
	[  +6.406566] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.001429] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[ +24.002020] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7a 6f 49 7e 9d d1 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 3a ac 18 98 c9 08 06
	[Jan10 08:52] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4a 05 5c 86 cf df 08 06
	[  +0.000337] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.000602] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[  +0.739059] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	[  +1.988700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[ +20.040962] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 ac f8 cf fb 84 08 06
	[  +0.000375] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[  +0.000915] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	
	
	==> etcd [90c58c3cd9924aced7da1338afb4ee8fd756be611e2c9aed303c38a538dfbacc] <==
	{"level":"info","ts":"2026-01-10T08:55:06.839476Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2026-01-10T08:55:06.839536Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2026-01-10T08:55:06.839375Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2026-01-10T08:55:06.839640Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T08:55:06.839716Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T08:55:06.839713Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T08:55:06.839768Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T08:55:07.528377Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T08:55:07.528426Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T08:55:07.528471Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2026-01-10T08:55:07.528485Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:55:07.528505Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T08:55:07.529370Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2026-01-10T08:55:07.529453Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:55:07.529492Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T08:55:07.529507Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2026-01-10T08:55:07.530344Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:55:07.530346Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:newest-cni-582650 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T08:55:07.530370Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:55:07.530579Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T08:55:07.530607Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T08:55:07.532049Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:55:07.532284Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:55:07.534613Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T08:55:07.534675Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
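
These raft lines show a healthy single-member restart: the member pre-votes, votes for itself, and becomes leader at term 3 roughly a second after the process came up. For hand-checking, the plain-HTTP metrics listener advertised above (127.0.0.1:2381) also serves a /health endpoint, so no client certificates are needed; a sketch:

	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// Address taken from the listen-metrics-urls line in the etcd log above.
		resp, err := http.Get("http://127.0.0.1:2381/health")
		if err != nil {
			fmt.Println(err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d %s\n", resp.StatusCode, body) // expect {"health":"true",...}
	}
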
	
	
	==> kernel <==
	 08:55:13 up 37 min,  0 user,  load average: 5.12, 4.32, 2.81
	Linux newest-cni-582650 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fe72c29c3cb566af837dc49a7fa9ff51841e0d239b1c0e7672cef1d14b2e2e1e] <==
	I0110 08:55:09.882201       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 08:55:09.882479       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0110 08:55:09.882593       1 main.go:148] setting mtu 1500 for CNI 
	I0110 08:55:09.882613       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 08:55:09.882644       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T08:55:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 08:55:10.081377       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 08:55:10.081427       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 08:55:10.081440       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 08:55:10.081591       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 08:55:10.481600       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 08:55:10.481630       1 metrics.go:72] Registering metrics
	I0110 08:55:10.481708       1 controller.go:711] "Syncing nftables rules"
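
The "nri plugin exited: failed to connect to NRI service" message in the kindnet log looks benign here: /var/run/nri/nri.sock exists only when NRI is enabled in the container runtime, and kindnet continued without it (its caches synced and it moved on to syncing nftables rules on the lines that follow that error).
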
	
	
	==> kube-apiserver [04153603f19d1830f6cad025b9d59e70752a925ea51b14474fc99161af31a6c1] <==
	I0110 08:55:08.482266       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 08:55:08.482155       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 08:55:08.482526       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 08:55:08.482558       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 08:55:08.482702       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 08:55:08.482674       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:08.482661       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0110 08:55:08.488672       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E0110 08:55:08.488851       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 08:55:08.489592       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 08:55:08.497484       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:08.497505       1 policy_source.go:248] refreshing policies
	I0110 08:55:08.516744       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 08:55:08.727081       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 08:55:08.757857       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 08:55:08.775581       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 08:55:08.781932       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 08:55:08.791029       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 08:55:08.820133       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.135.235"}
	I0110 08:55:08.829806       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.134.177"}
	I0110 08:55:09.385656       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 08:55:12.023535       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 08:55:12.023591       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 08:55:12.173018       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 08:55:12.223217       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ce0d065d2705be147f0cd136ee494369b9a709e0327cb0d06b594a233ab11c96] <==
	I0110 08:55:11.629159       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.629159       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.629850       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.630071       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.630124       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.630163       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.630567       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.631328       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.631595       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.631646       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.632464       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.632535       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.632628       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:55:11.633510       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.633801       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.633822       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.633807       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.634057       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.634060       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.634104       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.642103       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.732800       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.733912       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.733934       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 08:55:11.733940       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [dafc458b0b0ed2e517d82757911d24c31a8b3d269aea3f699c10d94d701dffe2] <==
	I0110 08:55:09.655828       1 server_linux.go:53] "Using iptables proxy"
	I0110 08:55:09.716118       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:55:09.816640       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:09.816687       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0110 08:55:09.816804       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 08:55:09.837888       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 08:55:09.837945       1 server_linux.go:136] "Using iptables Proxier"
	I0110 08:55:09.843277       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 08:55:09.843688       1 server.go:529] "Version info" version="v1.35.0"
	I0110 08:55:09.843712       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:55:09.845508       1 config.go:106] "Starting endpoint slice config controller"
	I0110 08:55:09.845533       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 08:55:09.845561       1 config.go:200] "Starting service config controller"
	I0110 08:55:09.845566       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 08:55:09.845618       1 config.go:309] "Starting node config controller"
	I0110 08:55:09.845630       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 08:55:09.845638       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 08:55:09.845750       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 08:55:09.845789       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 08:55:09.945787       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 08:55:09.945798       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 08:55:09.946087       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7365649bf838b1d9b8c45dcaa7ce29160f5dd8674c9802b9f0610e314ca173cc] <==
	I0110 08:55:07.255033       1 serving.go:386] Generated self-signed cert in-memory
	W0110 08:55:08.396084       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 08:55:08.396120       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 08:55:08.396132       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 08:55:08.396141       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 08:55:08.436595       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 08:55:08.436627       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:55:08.438593       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 08:55:08.438639       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:55:08.439127       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 08:55:08.439742       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 08:55:08.539712       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 08:55:08 newest-cni-582650 kubelet[670]: I0110 08:55:08.559866     670 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-582650"
	Jan 10 08:55:08 newest-cni-582650 kubelet[670]: E0110 08:55:08.565666     670 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-582650\" already exists" pod="kube-system/kube-scheduler-newest-cni-582650"
	Jan 10 08:55:08 newest-cni-582650 kubelet[670]: I0110 08:55:08.565707     670 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-582650"
	Jan 10 08:55:08 newest-cni-582650 kubelet[670]: E0110 08:55:08.571783     670 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-582650\" already exists" pod="kube-system/etcd-newest-cni-582650"
	Jan 10 08:55:08 newest-cni-582650 kubelet[670]: I0110 08:55:08.571821     670 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-582650"
	Jan 10 08:55:08 newest-cni-582650 kubelet[670]: E0110 08:55:08.577234     670 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-582650\" already exists" pod="kube-system/kube-apiserver-newest-cni-582650"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: I0110 08:55:09.241377     670 apiserver.go:52] "Watching apiserver"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: E0110 08:55:09.246118     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-582650" containerName="kube-controller-manager"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: I0110 08:55:09.247661     670 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: E0110 08:55:09.289778     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-582650" containerName="etcd"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: I0110 08:55:09.290937     670 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-582650"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: E0110 08:55:09.291257     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-582650" containerName="kube-apiserver"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: E0110 08:55:09.296501     670 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-582650\" already exists" pod="kube-system/kube-scheduler-newest-cni-582650"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: E0110 08:55:09.296585     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-582650" containerName="kube-scheduler"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: I0110 08:55:09.320910     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c1167720-98b8-4850-a264-11964eb2675d-cni-cfg\") pod \"kindnet-gp4sj\" (UID: \"c1167720-98b8-4850-a264-11964eb2675d\") " pod="kube-system/kindnet-gp4sj"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: I0110 08:55:09.320976     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1167720-98b8-4850-a264-11964eb2675d-lib-modules\") pod \"kindnet-gp4sj\" (UID: \"c1167720-98b8-4850-a264-11964eb2675d\") " pod="kube-system/kindnet-gp4sj"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: I0110 08:55:09.321042     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02b5ffbb-b52f-4339-bee2-b9400a4714bd-xtables-lock\") pod \"kube-proxy-ldmfv\" (UID: \"02b5ffbb-b52f-4339-bee2-b9400a4714bd\") " pod="kube-system/kube-proxy-ldmfv"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: I0110 08:55:09.321065     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02b5ffbb-b52f-4339-bee2-b9400a4714bd-lib-modules\") pod \"kube-proxy-ldmfv\" (UID: \"02b5ffbb-b52f-4339-bee2-b9400a4714bd\") " pod="kube-system/kube-proxy-ldmfv"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: I0110 08:55:09.321105     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1167720-98b8-4850-a264-11964eb2675d-xtables-lock\") pod \"kindnet-gp4sj\" (UID: \"c1167720-98b8-4850-a264-11964eb2675d\") " pod="kube-system/kindnet-gp4sj"
	Jan 10 08:55:10 newest-cni-582650 kubelet[670]: E0110 08:55:10.289023     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-582650" containerName="etcd"
	Jan 10 08:55:10 newest-cni-582650 kubelet[670]: E0110 08:55:10.289279     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-582650" containerName="kube-scheduler"
	Jan 10 08:55:10 newest-cni-582650 kubelet[670]: I0110 08:55:10.682935     670 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jan 10 08:55:10 newest-cni-582650 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 08:55:10 newest-cni-582650 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 08:55:10 newest-cni-582650 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
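
The closing systemd lines (kubelet deactivated at 08:55:10, one second after startup finished) are consistent with the serial/Pause step under test invoking minikube pause, which stops the kubelet; that would also explain why the NotReady taint and Pending pods captured above never converge within this log window.
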
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-582650 -n newest-cni-582650
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-582650 -n newest-cni-582650: exit status 2 (350.700224ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-582650 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-bmscc storage-provisioner dashboard-metrics-scraper-867fb5f87b-b99c5 kubernetes-dashboard-b84665fb8-hnzwl
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-582650 describe pod coredns-7d764666f9-bmscc storage-provisioner dashboard-metrics-scraper-867fb5f87b-b99c5 kubernetes-dashboard-b84665fb8-hnzwl
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-582650 describe pod coredns-7d764666f9-bmscc storage-provisioner dashboard-metrics-scraper-867fb5f87b-b99c5 kubernetes-dashboard-b84665fb8-hnzwl: exit status 1 (71.590108ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-bmscc" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-b99c5" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-hnzwl" not found

** /stderr **
helpers_test.go:288: kubectl --context newest-cni-582650 describe pod coredns-7d764666f9-bmscc storage-provisioner dashboard-metrics-scraper-867fb5f87b-b99c5 kubernetes-dashboard-b84665fb8-hnzwl: exit status 1
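
The NotFound errors above are most likely a post-mortem race rather than an additional failure: the non-running pod names were captured first, and by the time kubectl describe ran those pods had been deleted or recreated under new hashed names (the coredns and dashboard pods are ReplicaSet-managed), so the describe step exits 1 with nothing to show.
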
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-582650
helpers_test.go:244: (dbg) docker inspect newest-cni-582650:

-- stdout --
	[
	    {
	        "Id": "4dbf07d4b1628a1acd5c2e257abf85d859efa39216a44d32788ff5dc3a82de51",
	        "Created": "2026-01-10T08:54:34.771145794Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 337849,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2026-01-10T08:55:00.268116687Z",
	            "FinishedAt": "2026-01-10T08:54:59.391516804Z"
	        },
	        "Image": "sha256:e8ee619afcf8e9008c4fe3e7f2aba5fdbe7a9f0765053ca7dd53ab0df8ab02a5",
	        "ResolvConfPath": "/var/lib/docker/containers/4dbf07d4b1628a1acd5c2e257abf85d859efa39216a44d32788ff5dc3a82de51/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4dbf07d4b1628a1acd5c2e257abf85d859efa39216a44d32788ff5dc3a82de51/hostname",
	        "HostsPath": "/var/lib/docker/containers/4dbf07d4b1628a1acd5c2e257abf85d859efa39216a44d32788ff5dc3a82de51/hosts",
	        "LogPath": "/var/lib/docker/containers/4dbf07d4b1628a1acd5c2e257abf85d859efa39216a44d32788ff5dc3a82de51/4dbf07d4b1628a1acd5c2e257abf85d859efa39216a44d32788ff5dc3a82de51-json.log",
	        "Name": "/newest-cni-582650",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-582650:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-582650",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4dbf07d4b1628a1acd5c2e257abf85d859efa39216a44d32788ff5dc3a82de51",
	                "LowerDir": "/var/lib/docker/overlay2/cecf9ccbd369e95c2f1fad3e86ddbefa88377e415cb790180c787df246182877-init/diff:/var/lib/docker/overlay2/97b14eb520192356c991915ce74cd6094e6ef948f398ac688b667ea18491ccde/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cecf9ccbd369e95c2f1fad3e86ddbefa88377e415cb790180c787df246182877/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cecf9ccbd369e95c2f1fad3e86ddbefa88377e415cb790180c787df246182877/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cecf9ccbd369e95c2f1fad3e86ddbefa88377e415cb790180c787df246182877/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-582650",
	                "Source": "/var/lib/docker/volumes/newest-cni-582650/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-582650",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-582650",
	                "name.minikube.sigs.k8s.io": "newest-cni-582650",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b890a07c2b1d3b75264d0fa2fafaa9072018523e7719c3bc0cbd2311f467df07",
	            "SandboxKey": "/var/run/docker/netns/b890a07c2b1d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-582650": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "075f874b6857901f9e3f8b443cec464881c99fdb29213454b1860411dcc7e5ce",
	                    "EndpointID": "7e12442882becd0197f48f91aa7ee55a5f68b509c45969230c5a527c0ebbd7c0",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "7a:9f:1d:83:6d:0f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-582650",
	                        "4dbf07d4b162"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
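The inspect output above shows each exposed container port published on 127.0.0.1 with an ephemeral host port (22/tcp -> 33133, 8443/tcp -> 33136, and so on). As a sketch of how one such mapping can be read back by hand, using the same Go template minikube itself runs later in this log (and assuming the container is still named newest-cni-582650):

  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-582650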
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-582650 -n newest-cni-582650
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-582650 -n newest-cni-582650: exit status 2 (334.721758ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-582650 logs -n 25
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │              PROFILE              │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-093083                                                                                                                                                                                                                     │ old-k8s-version-093083            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ image   │ no-preload-095312 image list --format=json                                                                                                                                                                                                    │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p no-preload-095312 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p old-k8s-version-093083                                                                                                                                                                                                                     │ old-k8s-version-093083            │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p newest-cni-582650 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ delete  │ -p no-preload-095312                                                                                                                                                                                                                          │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ delete  │ -p no-preload-095312                                                                                                                                                                                                                          │ no-preload-095312                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p test-preload-dl-gcs-424382 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                        │ test-preload-dl-gcs-424382        │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-424382                                                                                                                                                                                                                 │ test-preload-dl-gcs-424382        │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p test-preload-dl-github-434342 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                  │ test-preload-dl-github-434342     │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ image   │ embed-certs-072273 image list --format=json                                                                                                                                                                                                   │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ pause   │ -p embed-certs-072273 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p test-preload-dl-github-434342                                                                                                                                                                                                              │ test-preload-dl-github-434342     │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p test-preload-dl-gcs-cached-077581 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio                                                                 │ test-preload-dl-gcs-cached-077581 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ delete  │ -p test-preload-dl-gcs-cached-077581                                                                                                                                                                                                          │ test-preload-dl-gcs-cached-077581 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ delete  │ -p embed-certs-072273                                                                                                                                                                                                                         │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ delete  │ -p embed-certs-072273                                                                                                                                                                                                                         │ embed-certs-072273                │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable metrics-server -p newest-cni-582650 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │                     │
	│ stop    │ -p newest-cni-582650 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ addons  │ enable dashboard -p newest-cni-582650 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:54 UTC │ 10 Jan 26 08:54 UTC │
	│ start   │ -p newest-cni-582650 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0 │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:55 UTC │ 10 Jan 26 08:55 UTC │
	│ image   │ default-k8s-diff-port-225354 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-225354      │ jenkins │ v1.37.0 │ 10 Jan 26 08:55 UTC │ 10 Jan 26 08:55 UTC │
	│ pause   │ -p default-k8s-diff-port-225354 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-225354      │ jenkins │ v1.37.0 │ 10 Jan 26 08:55 UTC │                     │
	│ image   │ newest-cni-582650 image list --format=json                                                                                                                                                                                                    │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:55 UTC │ 10 Jan 26 08:55 UTC │
	│ pause   │ -p newest-cni-582650 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-582650                 │ jenkins │ v1.37.0 │ 10 Jan 26 08:55 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
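	
	The Audit table above is collected as part of the post-mortem log dump; it can be regenerated with the same command the test harness ran:
	
	  out/minikube-linux-amd64 -p newest-cni-582650 logs -n 25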
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:55:00
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:55:00.043379  337651 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:55:00.043525  337651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:55:00.043538  337651 out.go:374] Setting ErrFile to fd 2...
	I0110 08:55:00.043544  337651 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:55:00.043842  337651 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:55:00.044396  337651 out.go:368] Setting JSON to false
	I0110 08:55:00.045548  337651 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2252,"bootTime":1768033048,"procs":261,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:55:00.045600  337651 start.go:143] virtualization: kvm guest
	I0110 08:55:00.047536  337651 out.go:179] * [newest-cni-582650] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 08:55:00.049141  337651 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:55:00.049136  337651 notify.go:221] Checking for updates...
	I0110 08:55:00.051578  337651 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:55:00.052772  337651 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:55:00.054143  337651 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:55:00.055504  337651 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 08:55:00.056874  337651 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:55:00.058529  337651 config.go:182] Loaded profile config "newest-cni-582650": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:55:00.059052  337651 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:55:00.083180  337651 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:55:00.083261  337651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:55:00.139318  337651 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2026-01-10 08:55:00.129647485 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:55:00.139469  337651 docker.go:319] overlay module found
	I0110 08:55:00.141276  337651 out.go:179] * Using the docker driver based on existing profile
	I0110 08:55:00.142458  337651 start.go:309] selected driver: docker
	I0110 08:55:00.142480  337651 start.go:928] validating driver "docker" against &{Name:newest-cni-582650 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-582650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:55:00.142582  337651 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:55:00.143267  337651 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:55:00.197877  337651 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:54 SystemTime:2026-01-10 08:55:00.188806511 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:55:00.198241  337651 start_flags.go:1038] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 08:55:00.198276  337651 cni.go:84] Creating CNI manager for ""
	I0110 08:55:00.198348  337651 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:55:00.198405  337651 start.go:353] cluster config:
	{Name:newest-cni-582650 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-582650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:55:00.200031  337651 out.go:179] * Starting "newest-cni-582650" primary control-plane node in "newest-cni-582650" cluster
	I0110 08:55:00.201239  337651 cache.go:134] Beginning downloading kic base image for docker with crio
	I0110 08:55:00.202384  337651 out.go:179] * Pulling base image v0.0.48-1767944074-22401 ...
	I0110 08:55:00.203414  337651 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:55:00.203449  337651 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
	I0110 08:55:00.203455  337651 cache.go:65] Caching tarball of preloaded images
	I0110 08:55:00.203502  337651 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon
	I0110 08:55:00.203549  337651 preload.go:251] Found /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0110 08:55:00.203565  337651 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0 on crio
	I0110 08:55:00.203687  337651 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/config.json ...
	I0110 08:55:00.223996  337651 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 in local docker daemon, skipping pull
	I0110 08:55:00.224013  337651 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 exists in daemon, skipping load
	I0110 08:55:00.224029  337651 cache.go:243] Successfully downloaded all kic artifacts
	I0110 08:55:00.224057  337651 start.go:360] acquireMachinesLock for newest-cni-582650: {Name:mk8a366cb6a19cf5fbfd56cf9cfee17123f828e4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0110 08:55:00.224121  337651 start.go:364] duration metric: took 36.014µs to acquireMachinesLock for "newest-cni-582650"
	I0110 08:55:00.224137  337651 start.go:96] Skipping create...Using existing machine configuration
	I0110 08:55:00.224141  337651 fix.go:54] fixHost starting: 
	I0110 08:55:00.224354  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:00.241369  337651 fix.go:112] recreateIfNeeded on newest-cni-582650: state=Stopped err=<nil>
	W0110 08:55:00.241406  337651 fix.go:138] unexpected machine state, will restart: <nil>
	I0110 08:55:00.243298  337651 out.go:252] * Restarting existing docker container for "newest-cni-582650" ...
	I0110 08:55:00.243356  337651 cli_runner.go:164] Run: docker start newest-cni-582650
	I0110 08:55:00.486349  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:00.505501  337651 kic.go:430] container "newest-cni-582650" state is running.
	I0110 08:55:00.505877  337651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-582650
	I0110 08:55:00.524765  337651 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/config.json ...
	I0110 08:55:00.525042  337651 machine.go:94] provisionDockerMachine start ...
	I0110 08:55:00.525107  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:00.544567  337651 main.go:144] libmachine: Using SSH client type: native
	I0110 08:55:00.544832  337651 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0110 08:55:00.544847  337651 main.go:144] libmachine: About to run SSH command:
	hostname
	I0110 08:55:00.545519  337651 main.go:144] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40212->127.0.0.1:33133: read: connection reset by peer
	I0110 08:55:03.674623  337651 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-582650
	
	I0110 08:55:03.674651  337651 ubuntu.go:182] provisioning hostname "newest-cni-582650"
	I0110 08:55:03.674704  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:03.692657  337651 main.go:144] libmachine: Using SSH client type: native
	I0110 08:55:03.692890  337651 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0110 08:55:03.692907  337651 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-582650 && echo "newest-cni-582650" | sudo tee /etc/hostname
	I0110 08:55:03.828409  337651 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-582650
	
	I0110 08:55:03.828473  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:03.846317  337651 main.go:144] libmachine: Using SSH client type: native
	I0110 08:55:03.846526  337651 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0110 08:55:03.846543  337651 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-582650' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-582650/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-582650' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0110 08:55:03.973261  337651 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I0110 08:55:03.973293  337651 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22427-3641/.minikube CaCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22427-3641/.minikube}
	I0110 08:55:03.973332  337651 ubuntu.go:190] setting up certificates
	I0110 08:55:03.973353  337651 provision.go:84] configureAuth start
	I0110 08:55:03.973412  337651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-582650
	I0110 08:55:03.991962  337651 provision.go:143] copyHostCerts
	I0110 08:55:03.992035  337651 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem, removing ...
	I0110 08:55:03.992063  337651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem
	I0110 08:55:03.992169  337651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/ca.pem (1078 bytes)
	I0110 08:55:03.992344  337651 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem, removing ...
	I0110 08:55:03.992367  337651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem
	I0110 08:55:03.992428  337651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/cert.pem (1123 bytes)
	I0110 08:55:03.992533  337651 exec_runner.go:144] found /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem, removing ...
	I0110 08:55:03.992544  337651 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem
	I0110 08:55:03.992585  337651 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22427-3641/.minikube/key.pem (1675 bytes)
	I0110 08:55:03.992659  337651 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem org=jenkins.newest-cni-582650 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-582650]
	I0110 08:55:04.081124  337651 provision.go:177] copyRemoteCerts
	I0110 08:55:04.081206  337651 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0110 08:55:04.081249  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.100529  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:04.194315  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0110 08:55:04.211927  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0110 08:55:04.229325  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0110 08:55:04.246098  337651 provision.go:87] duration metric: took 272.723804ms to configureAuth
	I0110 08:55:04.246123  337651 ubuntu.go:206] setting minikube options for container-runtime
	I0110 08:55:04.246301  337651 config.go:182] Loaded profile config "newest-cni-582650": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:55:04.246422  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.265307  337651 main.go:144] libmachine: Using SSH client type: native
	I0110 08:55:04.265532  337651 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x9087e0] 0x90b480 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I0110 08:55:04.265554  337651 main.go:144] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0110 08:55:04.543910  337651 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
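	
	As a quick sanity check that the sysconfig drop-in above landed, it can be read back inside the node (a sketch; e.g. over the SSH session minikube already holds, or via out/minikube-linux-amd64 -p newest-cni-582650 ssh):
	
	  cat /etc/sysconfig/crio.minikube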
	
	I0110 08:55:04.543936  337651 machine.go:97] duration metric: took 4.018877882s to provisionDockerMachine
	I0110 08:55:04.543951  337651 start.go:293] postStartSetup for "newest-cni-582650" (driver="docker")
	I0110 08:55:04.543965  337651 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0110 08:55:04.544023  337651 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0110 08:55:04.544069  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.562427  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:04.656029  337651 ssh_runner.go:195] Run: cat /etc/os-release
	I0110 08:55:04.659421  337651 main.go:144] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0110 08:55:04.659453  337651 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I0110 08:55:04.659466  337651 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-3641/.minikube/addons for local assets ...
	I0110 08:55:04.659517  337651 filesync.go:126] Scanning /home/jenkins/minikube-integration/22427-3641/.minikube/files for local assets ...
	I0110 08:55:04.659609  337651 filesync.go:149] local asset: /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem -> 71832.pem in /etc/ssl/certs
	I0110 08:55:04.659755  337651 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0110 08:55:04.668433  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem --> /etc/ssl/certs/71832.pem (1708 bytes)
	I0110 08:55:04.685867  337651 start.go:296] duration metric: took 141.902418ms for postStartSetup
	I0110 08:55:04.685949  337651 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:55:04.686014  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.704239  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:04.794956  337651 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0110 08:55:04.799376  337651 fix.go:56] duration metric: took 4.575228964s for fixHost
	I0110 08:55:04.799403  337651 start.go:83] releasing machines lock for "newest-cni-582650", held for 4.575271886s
	I0110 08:55:04.799453  337651 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-582650
	I0110 08:55:04.817146  337651 ssh_runner.go:195] Run: cat /version.json
	I0110 08:55:04.817199  337651 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0110 08:55:04.817280  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.817203  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:04.836895  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:04.837570  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:04.980302  337651 ssh_runner.go:195] Run: systemctl --version
	I0110 08:55:04.986927  337651 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0110 08:55:05.021964  337651 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0110 08:55:05.026769  337651 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0110 08:55:05.026837  337651 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0110 08:55:05.035076  337651 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0110 08:55:05.035124  337651 start.go:496] detecting cgroup driver to use...
	I0110 08:55:05.035171  337651 detect.go:178] detected "systemd" cgroup driver on host os
	I0110 08:55:05.035219  337651 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0110 08:55:05.049316  337651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0110 08:55:05.061222  337651 docker.go:218] disabling cri-docker service (if available) ...
	I0110 08:55:05.061266  337651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0110 08:55:05.076828  337651 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0110 08:55:05.088925  337651 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0110 08:55:05.169201  337651 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0110 08:55:05.250358  337651 docker.go:234] disabling docker service ...
	I0110 08:55:05.250421  337651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0110 08:55:05.265340  337651 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0110 08:55:05.277642  337651 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0110 08:55:05.354970  337651 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0110 08:55:05.438086  337651 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0110 08:55:05.450523  337651 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0110 08:55:05.464552  337651 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0110 08:55:05.464606  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.473501  337651 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0110 08:55:05.473560  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.482110  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.490292  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.498788  337651 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0110 08:55:05.507142  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.515949  337651 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.524862  337651 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0110 08:55:05.533635  337651 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0110 08:55:05.541045  337651 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0110 08:55:05.548719  337651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:55:05.628011  337651 ssh_runner.go:195] Run: sudo systemctl restart crio
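	
	The sed pipeline above edits /etc/crio/crio.conf.d/02-crio.conf in place and then restarts cri-o; a minimal sketch for verifying the keys it touched, run inside the node, would be:
	
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' /etc/crio/crio.conf.d/02-crio.conf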
	I0110 08:55:05.763111  337651 start.go:553] Will wait 60s for socket path /var/run/crio/crio.sock
	I0110 08:55:05.763196  337651 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0110 08:55:05.767248  337651 start.go:574] Will wait 60s for crictl version
	I0110 08:55:05.767300  337651 ssh_runner.go:195] Run: which crictl
	I0110 08:55:05.770834  337651 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I0110 08:55:05.795545  337651 start.go:590] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.35.0
	RuntimeApiVersion:  v1
	I0110 08:55:05.795612  337651 ssh_runner.go:195] Run: crio --version
	I0110 08:55:05.822934  337651 ssh_runner.go:195] Run: crio --version
	I0110 08:55:05.854094  337651 out.go:179] * Preparing Kubernetes v1.35.0 on CRI-O 1.35.0 ...
	I0110 08:55:05.855440  337651 cli_runner.go:164] Run: docker network inspect newest-cni-582650 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0110 08:55:05.874881  337651 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0110 08:55:05.878985  337651 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0110 08:55:05.890627  337651 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0110 08:55:05.891718  337651 kubeadm.go:884] updating cluster {Name:newest-cni-582650 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-582650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false} ...
	I0110 08:55:05.891861  337651 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
	I0110 08:55:05.891935  337651 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 08:55:05.926755  337651 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 08:55:05.926777  337651 crio.go:433] Images already preloaded, skipping extraction
	I0110 08:55:05.926824  337651 ssh_runner.go:195] Run: sudo crictl images --output json
	I0110 08:55:05.953234  337651 crio.go:561] all images are preloaded for cri-o runtime.
	I0110 08:55:05.953260  337651 cache_images.go:86] Images are preloaded, skipping loading
	I0110 08:55:05.953268  337651 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0 crio true true} ...
	I0110 08:55:05.953454  337651 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-582650 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0 ClusterName:newest-cni-582650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
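	
	The kubelet unit above is installed through a systemd drop-in (10-kubeadm.conf, copied a few lines below); the merged unit can be inspected inside the node with a standard systemd query (sketch):
	
	  systemctl cat kubelet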
	I0110 08:55:05.953555  337651 ssh_runner.go:195] Run: crio config
	I0110 08:55:05.999327  337651 cni.go:84] Creating CNI manager for ""
	I0110 08:55:05.999360  337651 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0110 08:55:05.999383  337651 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I0110 08:55:05.999417  337651 kubeadm.go:197] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-582650 NodeName:newest-cni-582650 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0110 08:55:05.999536  337651 kubeadm.go:203] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-582650"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
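
The rendered kubeadm configuration above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Recent kubeadm releases can sanity-check such a file before it is applied; a hedged sketch, assuming the "kubeadm config validate" subcommand is present in the v1.35.0 binaries staged on the node:

	$ sudo /var/lib/minikube/binaries/v1.35.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
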
	
	I0110 08:55:05.999603  337651 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0
	I0110 08:55:06.008278  337651 binaries.go:51] Found k8s binaries, skipping transfer
	I0110 08:55:06.008353  337651 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0110 08:55:06.015782  337651 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I0110 08:55:06.028209  337651 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0110 08:55:06.040652  337651 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I0110 08:55:06.053361  337651 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0110 08:55:06.057091  337651 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
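
The bash one-liner above is an idempotent hosts-file rewrite: filter any existing control-plane.minikube.internal line out of /etc/hosts, append the current mapping, and copy the result back, so repeated starts never accumulate duplicate entries. The same idiom, generalized (IP and HOST are placeholders; the separator is a literal tab):

	$ { grep -v $'\tHOST$' /etc/hosts; echo "IP	HOST"; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts
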
	I0110 08:55:06.067273  337651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:55:06.148919  337651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:55:06.175368  337651 certs.go:69] Setting up /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650 for IP: 192.168.94.2
	I0110 08:55:06.175391  337651 certs.go:195] generating shared ca certs ...
	I0110 08:55:06.175411  337651 certs.go:227] acquiring lock for ca certs: {Name:mk00e261408d0e9fd9be39128613c5110a764de2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:55:06.175572  337651 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.key
	I0110 08:55:06.175708  337651 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.key
	I0110 08:55:06.175769  337651 certs.go:257] generating profile certs ...
	I0110 08:55:06.175934  337651 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/client.key
	I0110 08:55:06.176008  337651 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/apiserver.key.0aa7c905
	I0110 08:55:06.176063  337651 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/proxy-client.key
	I0110 08:55:06.176203  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183.pem (1338 bytes)
	W0110 08:55:06.176248  337651 certs.go:480] ignoring /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183_empty.pem, impossibly tiny 0 bytes
	I0110 08:55:06.176263  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca-key.pem (1675 bytes)
	I0110 08:55:06.176306  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/ca.pem (1078 bytes)
	I0110 08:55:06.176343  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/cert.pem (1123 bytes)
	I0110 08:55:06.176377  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/certs/key.pem (1675 bytes)
	I0110 08:55:06.176437  337651 certs.go:484] found cert: /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem (1708 bytes)
	I0110 08:55:06.177184  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0110 08:55:06.196870  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0110 08:55:06.215933  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0110 08:55:06.235476  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0110 08:55:06.258185  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0110 08:55:06.277751  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0110 08:55:06.295268  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0110 08:55:06.312421  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/newest-cni-582650/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0110 08:55:06.329617  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/ssl/certs/71832.pem --> /usr/share/ca-certificates/71832.pem (1708 bytes)
	I0110 08:55:06.346649  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0110 08:55:06.364016  337651 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22427-3641/.minikube/certs/7183.pem --> /usr/share/ca-certificates/7183.pem (1338 bytes)
	I0110 08:55:06.382003  337651 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I0110 08:55:06.394320  337651 ssh_runner.go:195] Run: openssl version
	I0110 08:55:06.400371  337651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/71832.pem
	I0110 08:55:06.407685  337651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/71832.pem /etc/ssl/certs/71832.pem
	I0110 08:55:06.415138  337651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/71832.pem
	I0110 08:55:06.419188  337651 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jan 10 08:23 /usr/share/ca-certificates/71832.pem
	I0110 08:55:06.419234  337651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/71832.pem
	I0110 08:55:06.454164  337651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I0110 08:55:06.461860  337651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:55:06.470568  337651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I0110 08:55:06.478089  337651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:55:06.481724  337651 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jan 10 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:55:06.481786  337651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0110 08:55:06.515894  337651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I0110 08:55:06.523865  337651 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/7183.pem
	I0110 08:55:06.531389  337651 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/7183.pem /etc/ssl/certs/7183.pem
	I0110 08:55:06.538646  337651 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7183.pem
	I0110 08:55:06.542199  337651 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jan 10 08:23 /usr/share/ca-certificates/7183.pem
	I0110 08:55:06.542240  337651 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7183.pem
	I0110 08:55:06.577649  337651 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
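
The four-step pattern repeated above (test -s, ln -fs, openssl x509 -hash, test -L) implements OpenSSL's c_rehash convention: every trusted CA under /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0, which is where the 3ec20f2e, b5213941 and 51391683 names in the probes come from. The same steps by hand, for the CA from this run:

	$ hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	$ sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
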
	I0110 08:55:06.585536  337651 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0110 08:55:06.589317  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0110 08:55:06.625993  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0110 08:55:06.660607  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0110 08:55:06.701294  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0110 08:55:06.750337  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0110 08:55:06.795920  337651 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
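
Each openssl x509 -checkend 86400 call above exits non-zero if the certificate would expire within the next 86400 seconds, so this loop is a 24-hour validity check on every control-plane client and serving cert before they are reused. Standalone form:

	$ openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo "valid for 24h+" || echo "expires within 24h"
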
	I0110 08:55:06.844782  337651 kubeadm.go:401] StartCluster: {Name:newest-cni-582650 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:newest-cni-582650 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:55:06.844904  337651 cri.go:61] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0110 08:55:06.844978  337651 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0110 08:55:06.876617  337651 cri.go:96] found id: "7365649bf838b1d9b8c45dcaa7ce29160f5dd8674c9802b9f0610e314ca173cc"
	I0110 08:55:06.876642  337651 cri.go:96] found id: "ce0d065d2705be147f0cd136ee494369b9a709e0327cb0d06b594a233ab11c96"
	I0110 08:55:06.876646  337651 cri.go:96] found id: "04153603f19d1830f6cad025b9d59e70752a925ea51b14474fc99161af31a6c1"
	I0110 08:55:06.876650  337651 cri.go:96] found id: "90c58c3cd9924aced7da1338afb4ee8fd756be611e2c9aed303c38a538dfbacc"
	I0110 08:55:06.876653  337651 cri.go:96] found id: ""
	I0110 08:55:06.876706  337651 ssh_runner.go:195] Run: sudo runc list -f json
	W0110 08:55:06.889419  337651 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:55:06Z" level=error msg="open /run/runc: no such file or directory"
	I0110 08:55:06.889478  337651 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0110 08:55:06.897471  337651 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I0110 08:55:06.897491  337651 kubeadm.go:598] restartPrimaryControlPlane start ...
	I0110 08:55:06.897550  337651 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0110 08:55:06.905056  337651 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0110 08:55:06.905848  337651 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-582650" does not appear in /home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:55:06.906229  337651 kubeconfig.go:62] /home/jenkins/minikube-integration/22427-3641/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-582650" cluster setting kubeconfig missing "newest-cni-582650" context setting]
	I0110 08:55:06.906722  337651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/kubeconfig: {Name:mk0d29b3b0ee1fd71729aff31f901be50a2d0664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:55:06.907996  337651 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0110 08:55:06.916223  337651 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I0110 08:55:06.916252  337651 kubeadm.go:602] duration metric: took 18.746858ms to restartPrimaryControlPlane
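
The repair above adds the missing newest-cni-582650 cluster and context entries to the workspace kubeconfig. The standard client-side checks for the result:

	$ kubectl config get-contexts
	$ kubectl config use-context newest-cni-582650
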
	I0110 08:55:06.916267  337651 kubeadm.go:403] duration metric: took 71.493899ms to StartCluster
	I0110 08:55:06.916288  337651 settings.go:142] acquiring lock: {Name:mkbb32fc6441ceb31ce2923ea8999f8375298f08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:55:06.916352  337651 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:55:06.917032  337651 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22427-3641/kubeconfig: {Name:mk0d29b3b0ee1fd71729aff31f901be50a2d0664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0110 08:55:06.917252  337651 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0110 08:55:06.917332  337651 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0110 08:55:06.917423  337651 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-582650"
	I0110 08:55:06.917441  337651 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-582650"
	W0110 08:55:06.917448  337651 addons.go:248] addon storage-provisioner should already be in state true
	I0110 08:55:06.917456  337651 addons.go:70] Setting dashboard=true in profile "newest-cni-582650"
	I0110 08:55:06.917486  337651 addons.go:239] Setting addon dashboard=true in "newest-cni-582650"
	W0110 08:55:06.917500  337651 addons.go:248] addon dashboard should already be in state true
	I0110 08:55:06.917498  337651 config.go:182] Loaded profile config "newest-cni-582650": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:55:06.917505  337651 addons.go:70] Setting default-storageclass=true in profile "newest-cni-582650"
	I0110 08:55:06.917531  337651 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-582650"
	I0110 08:55:06.917545  337651 host.go:66] Checking if "newest-cni-582650" exists ...
	I0110 08:55:06.917487  337651 host.go:66] Checking if "newest-cni-582650" exists ...
	I0110 08:55:06.917888  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:06.918065  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:06.918090  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:06.922752  337651 out.go:179] * Verifying Kubernetes components...
	I0110 08:55:06.924557  337651 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0110 08:55:06.944895  337651 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0110 08:55:06.944980  337651 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0110 08:55:06.946125  337651 addons.go:239] Setting addon default-storageclass=true in "newest-cni-582650"
	W0110 08:55:06.946159  337651 addons.go:248] addon default-storageclass should already be in state true
	I0110 08:55:06.946192  337651 host.go:66] Checking if "newest-cni-582650" exists ...
	I0110 08:55:06.946653  337651 cli_runner.go:164] Run: docker container inspect newest-cni-582650 --format={{.State.Status}}
	I0110 08:55:06.946876  337651 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 08:55:06.946895  337651 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0110 08:55:06.946956  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:06.947943  337651 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0110 08:55:06.949187  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0110 08:55:06.949212  337651 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0110 08:55:06.949272  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:06.979786  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:06.982713  337651 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I0110 08:55:06.982757  337651 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0110 08:55:06.982820  337651 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-582650
	I0110 08:55:06.986338  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:07.009531  337651 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/newest-cni-582650/id_rsa Username:docker}
	I0110 08:55:07.065163  337651 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0110 08:55:07.081832  337651 api_server.go:52] waiting for apiserver process to appear ...
	I0110 08:55:07.081898  337651 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 08:55:07.095536  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0110 08:55:07.095562  337651 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0110 08:55:07.096364  337651 api_server.go:72] duration metric: took 179.085582ms to wait for apiserver process to appear ...
	I0110 08:55:07.096384  337651 api_server.go:88] waiting for apiserver healthz status ...
	I0110 08:55:07.096403  337651 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0110 08:55:07.100030  337651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0110 08:55:07.111493  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0110 08:55:07.111519  337651 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0110 08:55:07.122472  337651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0110 08:55:07.128466  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0110 08:55:07.128484  337651 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0110 08:55:07.144597  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0110 08:55:07.144620  337651 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0110 08:55:07.160177  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0110 08:55:07.160236  337651 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0110 08:55:07.177064  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0110 08:55:07.177088  337651 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0110 08:55:07.192696  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0110 08:55:07.192723  337651 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0110 08:55:07.207042  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0110 08:55:07.207063  337651 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0110 08:55:07.219547  337651 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 08:55:07.219572  337651 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0110 08:55:07.232446  337651 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0110 08:55:08.397883  337651 api_server.go:325] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0110 08:55:08.397912  337651 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0110 08:55:08.397934  337651 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0110 08:55:08.408043  337651 api_server.go:325] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0110 08:55:08.408134  337651 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0110 08:55:08.597191  337651 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0110 08:55:08.602170  337651 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 08:55:08.602223  337651 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
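
In the 500 bodies above only two poststarthooks are still failing, rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes; both re-seed built-in objects after an apiserver restart and clear on their own, and the endpoint returns a plain "ok" about a second later at 08:55:09. The per-check breakdown can also be requested explicitly:

	$ curl -k "https://192.168.94.2:8443/healthz?verbose"
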
	I0110 08:55:08.914012  337651 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.813953016s)
	I0110 08:55:08.914115  337651 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.791608298s)
	I0110 08:55:08.914183  337651 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.681703498s)
	I0110 08:55:08.916001  337651 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-582650 addons enable metrics-server
	
	I0110 08:55:08.924789  337651 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I0110 08:55:08.926065  337651 addons.go:530] duration metric: took 2.008739629s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
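
With the three addons applied, the profile's addon state can be listed at any point:

	$ minikube -p newest-cni-582650 addons list
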
	I0110 08:55:09.096869  337651 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0110 08:55:09.101576  337651 api_server.go:325] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0110 08:55:09.101606  337651 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0110 08:55:09.597108  337651 api_server.go:299] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0110 08:55:09.606628  337651 api_server.go:325] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0110 08:55:09.608612  337651 api_server.go:141] control plane version: v1.35.0
	I0110 08:55:09.608678  337651 api_server.go:131] duration metric: took 2.512285395s to wait for apiserver health ...
	I0110 08:55:09.608701  337651 system_pods.go:43] waiting for kube-system pods to appear ...
	I0110 08:55:09.612542  337651 system_pods.go:59] 8 kube-system pods found
	I0110 08:55:09.612572  337651 system_pods.go:61] "coredns-7d764666f9-bmscc" [bc0ad55b-bbf6-4898-a38a-7a1a2d154cb3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 08:55:09.612580  337651 system_pods.go:61] "etcd-newest-cni-582650" [bb439312-4d17-46e1-9d07-4b972ad2299b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0110 08:55:09.612587  337651 system_pods.go:61] "kindnet-gp4sj" [c1167720-98b8-4850-a264-11964eb2675d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0110 08:55:09.612599  337651 system_pods.go:61] "kube-apiserver-newest-cni-582650" [947302b1-615d-4f31-976c-039fcf37be97] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0110 08:55:09.612607  337651 system_pods.go:61] "kube-controller-manager-newest-cni-582650" [c2156827-ae41-4c25-958a-ea329f7adf65] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0110 08:55:09.612614  337651 system_pods.go:61] "kube-proxy-ldmfv" [02b5ffbb-b52f-4339-bee2-b9400a4714bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0110 08:55:09.612621  337651 system_pods.go:61] "kube-scheduler-newest-cni-582650" [8d788728-c388-42a6-9bcd-9ab2bf3468fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0110 08:55:09.612634  337651 system_pods.go:61] "storage-provisioner" [349ec60d-a776-479e-b9a0-892989e886eb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0110 08:55:09.612643  337651 system_pods.go:74] duration metric: took 3.926901ms to wait for pod list to return data ...
	I0110 08:55:09.612653  337651 default_sa.go:34] waiting for default service account to be created ...
	I0110 08:55:09.615183  337651 default_sa.go:45] found service account: "default"
	I0110 08:55:09.615208  337651 default_sa.go:55] duration metric: took 2.548851ms for default service account to be created ...
	I0110 08:55:09.615222  337651 kubeadm.go:587] duration metric: took 2.697945894s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0110 08:55:09.615245  337651 node_conditions.go:102] verifying NodePressure condition ...
	I0110 08:55:09.617802  337651 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0110 08:55:09.617836  337651 node_conditions.go:123] node cpu capacity is 8
	I0110 08:55:09.617855  337651 node_conditions.go:105] duration metric: took 2.604361ms to run NodePressure ...
	I0110 08:55:09.617875  337651 start.go:242] waiting for startup goroutines ...
	I0110 08:55:09.617884  337651 start.go:247] waiting for cluster config update ...
	I0110 08:55:09.617898  337651 start.go:256] writing updated cluster config ...
	I0110 08:55:09.618148  337651 ssh_runner.go:195] Run: rm -f paused
	I0110 08:55:09.667016  337651 start.go:625] kubectl: 1.35.0, cluster: 1.35.0 (minor skew: 0)
	I0110 08:55:09.669727  337651 out.go:179] * Done! kubectl is now configured to use "newest-cni-582650" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.552415739Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.555460668Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c1e728ab-f65c-41a0-9650-795e33d34d25 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.556203109Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6f2acd88-9e09-4739-a3fb-23bbeaaff32b name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.557226411Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.55790479Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.55817598Z" level=info msg="Ran pod sandbox 4facb9f39df6cbf1ac474fa6b2617a3f5e124e7806d5d95dd2162682a8865d5f with infra container: kube-system/kindnet-gp4sj/POD" id=c1e728ab-f65c-41a0-9650-795e33d34d25 name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.558717185Z" level=info msg="Ran pod sandbox ae8d846f012dd405959c0496b349ab8b5912462410f06e662eb5e6370d977a06 with infra container: kube-system/kube-proxy-ldmfv/POD" id=6f2acd88-9e09-4739-a3fb-23bbeaaff32b name=/runtime.v1.RuntimeService/RunPodSandbox
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.55935734Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=d5bee083-834c-44bb-9afb-424b4283f781 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.560522772Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=707024a3-eefd-4222-94c0-d7194a475dd9 name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.560574907Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=9502548d-e6e3-4aca-ade9-90ccce39506f name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.561785578Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0" id=2cc6caf3-a7f6-4a69-ad3c-b6dcc185e30a name=/runtime.v1.ImageService/ImageStatus
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.562071304Z" level=info msg="Creating container: kube-system/kindnet-gp4sj/kindnet-cni" id=6edb84b5-f143-4222-9ff0-c0d390f090ec name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.562176185Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.562643784Z" level=info msg="Creating container: kube-system/kube-proxy-ldmfv/kube-proxy" id=6a98df64-51cf-4416-8cb8-d41306615ea0 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.562837401Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.567005645Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.567613768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.56982781Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.570496388Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.610553439Z" level=info msg="Created container fe72c29c3cb566af837dc49a7fa9ff51841e0d239b1c0e7672cef1d14b2e2e1e: kube-system/kindnet-gp4sj/kindnet-cni" id=6edb84b5-f143-4222-9ff0-c0d390f090ec name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.611485729Z" level=info msg="Starting container: fe72c29c3cb566af837dc49a7fa9ff51841e0d239b1c0e7672cef1d14b2e2e1e" id=3fa7cc34-d3aa-4f55-9fb4-00b1c8754a4a name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.613635156Z" level=info msg="Started container" PID=1052 containerID=fe72c29c3cb566af837dc49a7fa9ff51841e0d239b1c0e7672cef1d14b2e2e1e description=kube-system/kindnet-gp4sj/kindnet-cni id=3fa7cc34-d3aa-4f55-9fb4-00b1c8754a4a name=/runtime.v1.RuntimeService/StartContainer sandboxID=4facb9f39df6cbf1ac474fa6b2617a3f5e124e7806d5d95dd2162682a8865d5f
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.613792641Z" level=info msg="Created container dafc458b0b0ed2e517d82757911d24c31a8b3d269aea3f699c10d94d701dffe2: kube-system/kube-proxy-ldmfv/kube-proxy" id=6a98df64-51cf-4416-8cb8-d41306615ea0 name=/runtime.v1.RuntimeService/CreateContainer
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.614448142Z" level=info msg="Starting container: dafc458b0b0ed2e517d82757911d24c31a8b3d269aea3f699c10d94d701dffe2" id=b5a071e6-a80b-40d6-8777-1ad8844b00cf name=/runtime.v1.RuntimeService/StartContainer
	Jan 10 08:55:09 newest-cni-582650 crio[519]: time="2026-01-10T08:55:09.617356025Z" level=info msg="Started container" PID=1053 containerID=dafc458b0b0ed2e517d82757911d24c31a8b3d269aea3f699c10d94d701dffe2 description=kube-system/kube-proxy-ldmfv/kube-proxy id=b5a071e6-a80b-40d6-8777-1ad8844b00cf name=/runtime.v1.RuntimeService/StartContainer sandboxID=ae8d846f012dd405959c0496b349ab8b5912462410f06e662eb5e6370d977a06
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	dafc458b0b0ed       32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8   5 seconds ago       Running             kube-proxy                1                   ae8d846f012dd       kube-proxy-ldmfv                            kube-system
	fe72c29c3cb56       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   5 seconds ago       Running             kindnet-cni               1                   4facb9f39df6c       kindnet-gp4sj                               kube-system
	7365649bf838b       550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc   8 seconds ago       Running             kube-scheduler            1                   4e5e08379238e       kube-scheduler-newest-cni-582650            kube-system
	ce0d065d2705b       2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508   8 seconds ago       Running             kube-controller-manager   1                   866357ecdff7e       kube-controller-manager-newest-cni-582650   kube-system
	04153603f19d1       5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499   8 seconds ago       Running             kube-apiserver            1                   8a540ff2b8b09       kube-apiserver-newest-cni-582650            kube-system
	90c58c3cd9924       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   8 seconds ago       Running             etcd                      1                   6cfd045b0894b       etcd-newest-cni-582650                      kube-system
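
Every row above reports ATTEMPT 1, i.e. the second incarnation of each container, consistent with the restartPrimaryControlPlane path earlier in the log; the table itself is crictl output and can be reproduced on the node:

	$ sudo crictl ps -a
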
	
	
	==> describe nodes <==
	Name:               newest-cni-582650
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-582650
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=70a45a31ac61db388909158c013d81580d01a5ee
	                    minikube.k8s.io/name=newest-cni-582650
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2026_01_10T08_54_50_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 10 Jan 2026 08:54:46 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-582650
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 10 Jan 2026 08:55:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 10 Jan 2026 08:55:08 +0000   Sat, 10 Jan 2026 08:54:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 10 Jan 2026 08:55:08 +0000   Sat, 10 Jan 2026 08:54:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 10 Jan 2026 08:55:08 +0000   Sat, 10 Jan 2026 08:54:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 10 Jan 2026 08:55:08 +0000   Sat, 10 Jan 2026 08:54:46 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-582650
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 73d0d7ed7bc783a051b563196960b035
	  System UUID:                31447831-8276-4e9c-bb29-38ef2ce553ce
	  Boot ID:                    7c12f492-59b6-440b-986c-741ece916a23
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.35.0
	  Kubelet Version:            v1.35.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-582650                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         25s
	  kube-system                 kindnet-gp4sj                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      20s
	  kube-system                 kube-apiserver-newest-cni-582650             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-newest-cni-582650    200m (2%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-proxy-ldmfv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 kube-scheduler-newest-cni-582650             100m (1%)     0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  21s   node-controller  Node newest-cni-582650 event: Registered Node newest-cni-582650 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-582650 event: Registered Node newest-cni-582650 in Controller
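
This describe output also explains the two Pending pods in the earlier system_pods list: the node still carries the node.kubernetes.io/not-ready:NoSchedule taint because no CNI configuration exists in /etc/cni/net.d/ yet; once the just-restarted kindnet container (see the CRI-O log above) writes its config there, the kubelet reports NetworkReady=true and the taint is lifted. Quick checks:

	$ sudo ls /etc/cni/net.d/
	$ kubectl get nodes -w
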
	
	
	==> dmesg <==
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 4a 03 46 88 66 4e 08 06
	[  +6.406566] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.001429] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[ +24.002020] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 7a 6f 49 7e 9d d1 08 06
	[  +0.000366] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 3e 3a ac 18 98 c9 08 06
	[Jan10 08:52] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 4a 05 5c 86 cf df 08 06
	[  +0.000337] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff 7a 4f c9 a1 45 f4 08 06
	[  +0.000602] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff ca 5b 5b a9 7c eb 08 06
	[  +0.739059] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	[  +1.988700] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[ +20.040962] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 82 ac f8 cf fb 84 08 06
	[  +0.000375] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff b6 6e ec e4 aa 3d 08 06
	[  +0.000915] IPv4: martian source 10.244.0.4 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 32 81 dc 0a 7c 08 06
	
	
	==> etcd [90c58c3cd9924aced7da1338afb4ee8fd756be611e2c9aed303c38a538dfbacc] <==
	{"level":"info","ts":"2026-01-10T08:55:06.839476Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2026-01-10T08:55:06.839536Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2026-01-10T08:55:06.839375Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2026-01-10T08:55:06.839640Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2026-01-10T08:55:06.839716Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2026-01-10T08:55:06.839713Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2026-01-10T08:55:06.839768Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2026-01-10T08:55:07.528377Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2026-01-10T08:55:07.528426Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2026-01-10T08:55:07.528471Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2026-01-10T08:55:07.528485Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:55:07.528505Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2026-01-10T08:55:07.529370Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2026-01-10T08:55:07.529453Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"dfc97eb0aae75b33 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2026-01-10T08:55:07.529492Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2026-01-10T08:55:07.529507Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2026-01-10T08:55:07.530344Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:55:07.530346Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:newest-cni-582650 ClientURLs:[https://192.168.94.2:2379]}","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2026-01-10T08:55:07.530370Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2026-01-10T08:55:07.530579Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2026-01-10T08:55:07.530607Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2026-01-10T08:55:07.532049Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:55:07.532284Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2026-01-10T08:55:07.534613Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2026-01-10T08:55:07.534675Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	
	
	==> kernel <==
	 08:55:15 up 37 min,  0 user,  load average: 4.87, 4.29, 2.81
	Linux newest-cni-582650 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [fe72c29c3cb566af837dc49a7fa9ff51841e0d239b1c0e7672cef1d14b2e2e1e] <==
	I0110 08:55:09.882201       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0110 08:55:09.882479       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I0110 08:55:09.882593       1 main.go:148] setting mtu 1500 for CNI 
	I0110 08:55:09.882613       1 main.go:178] kindnetd IP family: "ipv4"
	I0110 08:55:09.882644       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2026-01-10T08:55:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0110 08:55:10.081377       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0110 08:55:10.081427       1 controller.go:381] "Waiting for informer caches to sync"
	I0110 08:55:10.081440       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0110 08:55:10.081591       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0110 08:55:10.481600       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0110 08:55:10.481630       1 metrics.go:72] Registering metrics
	I0110 08:55:10.481708       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [04153603f19d1830f6cad025b9d59e70752a925ea51b14474fc99161af31a6c1] <==
	I0110 08:55:08.482266       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0110 08:55:08.482155       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0110 08:55:08.482526       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I0110 08:55:08.482558       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0110 08:55:08.482702       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0110 08:55:08.482674       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:08.482661       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I0110 08:55:08.488672       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E0110 08:55:08.488851       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0110 08:55:08.489592       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0110 08:55:08.497484       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:08.497505       1 policy_source.go:248] refreshing policies
	I0110 08:55:08.516744       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0110 08:55:08.727081       1 controller.go:667] quota admission added evaluator for: namespaces
	I0110 08:55:08.757857       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0110 08:55:08.775581       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0110 08:55:08.781932       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0110 08:55:08.791029       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0110 08:55:08.820133       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.135.235"}
	I0110 08:55:08.829806       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.134.177"}
	I0110 08:55:09.385656       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I0110 08:55:12.023535       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 08:55:12.023591       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0110 08:55:12.173018       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0110 08:55:12.223217       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [ce0d065d2705be147f0cd136ee494369b9a709e0327cb0d06b594a233ab11c96] <==
	I0110 08:55:11.629159       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.629159       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.629850       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.630071       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.630124       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.630163       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.630567       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.631328       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.631595       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.631646       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.632464       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.632535       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.632628       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:55:11.633510       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.633801       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.633822       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.633807       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.634057       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.634060       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.634104       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.642103       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.732800       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.733912       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:11.733934       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I0110 08:55:11.733940       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [dafc458b0b0ed2e517d82757911d24c31a8b3d269aea3f699c10d94d701dffe2] <==
	I0110 08:55:09.655828       1 server_linux.go:53] "Using iptables proxy"
	I0110 08:55:09.716118       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:55:09.816640       1 shared_informer.go:377] "Caches are synced"
	I0110 08:55:09.816687       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0110 08:55:09.816804       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0110 08:55:09.837888       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0110 08:55:09.837945       1 server_linux.go:136] "Using iptables Proxier"
	I0110 08:55:09.843277       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0110 08:55:09.843688       1 server.go:529] "Version info" version="v1.35.0"
	I0110 08:55:09.843712       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:55:09.845508       1 config.go:106] "Starting endpoint slice config controller"
	I0110 08:55:09.845533       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0110 08:55:09.845561       1 config.go:200] "Starting service config controller"
	I0110 08:55:09.845566       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0110 08:55:09.845618       1 config.go:309] "Starting node config controller"
	I0110 08:55:09.845630       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0110 08:55:09.845638       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0110 08:55:09.845750       1 config.go:403] "Starting serviceCIDR config controller"
	I0110 08:55:09.845789       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0110 08:55:09.945787       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0110 08:55:09.945798       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0110 08:55:09.946087       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [7365649bf838b1d9b8c45dcaa7ce29160f5dd8674c9802b9f0610e314ca173cc] <==
	I0110 08:55:07.255033       1 serving.go:386] Generated self-signed cert in-memory
	W0110 08:55:08.396084       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0110 08:55:08.396120       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0110 08:55:08.396132       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0110 08:55:08.396141       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0110 08:55:08.436595       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0"
	I0110 08:55:08.436627       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0110 08:55:08.438593       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0110 08:55:08.438639       1 shared_informer.go:370] "Waiting for caches to sync"
	I0110 08:55:08.439127       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0110 08:55:08.439742       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0110 08:55:08.539712       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Jan 10 08:55:08 newest-cni-582650 kubelet[670]: I0110 08:55:08.559866     670 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-582650"
	Jan 10 08:55:08 newest-cni-582650 kubelet[670]: E0110 08:55:08.565666     670 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-582650\" already exists" pod="kube-system/kube-scheduler-newest-cni-582650"
	Jan 10 08:55:08 newest-cni-582650 kubelet[670]: I0110 08:55:08.565707     670 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-582650"
	Jan 10 08:55:08 newest-cni-582650 kubelet[670]: E0110 08:55:08.571783     670 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-582650\" already exists" pod="kube-system/etcd-newest-cni-582650"
	Jan 10 08:55:08 newest-cni-582650 kubelet[670]: I0110 08:55:08.571821     670 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-582650"
	Jan 10 08:55:08 newest-cni-582650 kubelet[670]: E0110 08:55:08.577234     670 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-582650\" already exists" pod="kube-system/kube-apiserver-newest-cni-582650"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: I0110 08:55:09.241377     670 apiserver.go:52] "Watching apiserver"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: E0110 08:55:09.246118     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-582650" containerName="kube-controller-manager"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: I0110 08:55:09.247661     670 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: E0110 08:55:09.289778     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-582650" containerName="etcd"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: I0110 08:55:09.290937     670 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-582650"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: E0110 08:55:09.291257     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-582650" containerName="kube-apiserver"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: E0110 08:55:09.296501     670 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-582650\" already exists" pod="kube-system/kube-scheduler-newest-cni-582650"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: E0110 08:55:09.296585     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-582650" containerName="kube-scheduler"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: I0110 08:55:09.320910     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c1167720-98b8-4850-a264-11964eb2675d-cni-cfg\") pod \"kindnet-gp4sj\" (UID: \"c1167720-98b8-4850-a264-11964eb2675d\") " pod="kube-system/kindnet-gp4sj"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: I0110 08:55:09.320976     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1167720-98b8-4850-a264-11964eb2675d-lib-modules\") pod \"kindnet-gp4sj\" (UID: \"c1167720-98b8-4850-a264-11964eb2675d\") " pod="kube-system/kindnet-gp4sj"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: I0110 08:55:09.321042     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/02b5ffbb-b52f-4339-bee2-b9400a4714bd-xtables-lock\") pod \"kube-proxy-ldmfv\" (UID: \"02b5ffbb-b52f-4339-bee2-b9400a4714bd\") " pod="kube-system/kube-proxy-ldmfv"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: I0110 08:55:09.321065     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/02b5ffbb-b52f-4339-bee2-b9400a4714bd-lib-modules\") pod \"kube-proxy-ldmfv\" (UID: \"02b5ffbb-b52f-4339-bee2-b9400a4714bd\") " pod="kube-system/kube-proxy-ldmfv"
	Jan 10 08:55:09 newest-cni-582650 kubelet[670]: I0110 08:55:09.321105     670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1167720-98b8-4850-a264-11964eb2675d-xtables-lock\") pod \"kindnet-gp4sj\" (UID: \"c1167720-98b8-4850-a264-11964eb2675d\") " pod="kube-system/kindnet-gp4sj"
	Jan 10 08:55:10 newest-cni-582650 kubelet[670]: E0110 08:55:10.289023     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-582650" containerName="etcd"
	Jan 10 08:55:10 newest-cni-582650 kubelet[670]: E0110 08:55:10.289279     670 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-582650" containerName="kube-scheduler"
	Jan 10 08:55:10 newest-cni-582650 kubelet[670]: I0110 08:55:10.682935     670 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Jan 10 08:55:10 newest-cni-582650 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Jan 10 08:55:10 newest-cni-582650 systemd[1]: kubelet.service: Deactivated successfully.
	Jan 10 08:55:10 newest-cni-582650 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-582650 -n newest-cni-582650
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-582650 -n newest-cni-582650: exit status 2 (348.841674ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-582650 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-bmscc storage-provisioner dashboard-metrics-scraper-867fb5f87b-b99c5 kubernetes-dashboard-b84665fb8-hnzwl
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-582650 describe pod coredns-7d764666f9-bmscc storage-provisioner dashboard-metrics-scraper-867fb5f87b-b99c5 kubernetes-dashboard-b84665fb8-hnzwl
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-582650 describe pod coredns-7d764666f9-bmscc storage-provisioner dashboard-metrics-scraper-867fb5f87b-b99c5 kubernetes-dashboard-b84665fb8-hnzwl: exit status 1 (65.410569ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-bmscc" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-b99c5" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-hnzwl" not found

** /stderr **
helpers_test.go:288: kubectl --context newest-cni-582650 describe pod coredns-7d764666f9-bmscc storage-provisioner dashboard-metrics-scraper-867fb5f87b-b99c5 kubernetes-dashboard-b84665fb8-hnzwl: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.43s)
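Note: the two post-mortem kubectl calls above race pod cleanup; the pods listed as non-running were already gone by the time describe ran (most likely removed or replaced during the restart), hence the NotFound errors. A minimal sketch of the same two-step check, assuming the cluster context from this run is still reachable:

	# list pods in any phase other than Running, across all namespaces
	kubectl --context newest-cni-582650 get po -A -o=jsonpath='{.items[*].metadata.name}' --field-selector=status.phase!=Running
	# describe one of the returned names; NotFound here just means the pod vanished in the meantime
	kubectl --context newest-cni-582650 describe pod <pod-name>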


Test pass (279/332)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 4.85
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.35.0/json-events 2.71
13 TestDownloadOnly/v1.35.0/preload-exists 0
17 TestDownloadOnly/v1.35.0/LogsDuration 0.07
18 TestDownloadOnly/v1.35.0/DeleteAll 0.22
19 TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.4
21 TestBinaryMirror 0.8
22 TestOffline 61.76
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 90.17
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 7.4
48 TestAddons/StoppedEnableDisable 16.67
49 TestCertOptions 27.77
50 TestCertExpiration 211.37
52 TestForceSystemdFlag 22.28
53 TestForceSystemdEnv 22.44
58 TestErrorSpam/setup 18.76
59 TestErrorSpam/start 0.65
60 TestErrorSpam/status 0.94
61 TestErrorSpam/pause 5.64
62 TestErrorSpam/unpause 5.19
63 TestErrorSpam/stop 12.57
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 35.12
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.08
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.44
75 TestFunctional/serial/CacheCmd/cache/add_local 0.87
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.46
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 61.57
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.17
86 TestFunctional/serial/LogsFileCmd 1.21
87 TestFunctional/serial/InvalidService 3.94
89 TestFunctional/parallel/ConfigCmd 0.45
90 TestFunctional/parallel/DashboardCmd 5.93
91 TestFunctional/parallel/DryRun 0.78
92 TestFunctional/parallel/InternationalLanguage 0.2
93 TestFunctional/parallel/StatusCmd 1.14
97 TestFunctional/parallel/ServiceCmdConnect 7.74
98 TestFunctional/parallel/AddonsCmd 0.19
99 TestFunctional/parallel/PersistentVolumeClaim 22.72
101 TestFunctional/parallel/SSHCmd 0.52
102 TestFunctional/parallel/CpCmd 2.2
103 TestFunctional/parallel/MySQL 21.95
104 TestFunctional/parallel/FileSync 0.27
105 TestFunctional/parallel/CertSync 1.78
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.58
113 TestFunctional/parallel/License 0.27
114 TestFunctional/parallel/ServiceCmd/DeployApp 7.15
115 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
116 TestFunctional/parallel/ProfileCmd/profile_list 0.46
117 TestFunctional/parallel/MountCmd/any-port 14.33
118 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
119 TestFunctional/parallel/ServiceCmd/List 0.66
120 TestFunctional/parallel/ServiceCmd/JSONOutput 0.59
121 TestFunctional/parallel/ServiceCmd/HTTPS 0.46
122 TestFunctional/parallel/ServiceCmd/Format 0.45
123 TestFunctional/parallel/ServiceCmd/URL 0.52
124 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
125 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
126 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
127 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
128 TestFunctional/parallel/ImageCommands/ImageBuild 2.57
129 TestFunctional/parallel/ImageCommands/Setup 0.37
130 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.18
131 TestFunctional/parallel/Version/short 0.06
132 TestFunctional/parallel/Version/components 0.54
133 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.94
134 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.13
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.44
136 TestFunctional/parallel/ImageCommands/ImageRemove 2.17
137 TestFunctional/parallel/MountCmd/specific-port 2
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.93
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
141 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.53
142 TestFunctional/parallel/MountCmd/VerifyCleanup 1.63
143 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
145 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.21
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
149 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
150 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
154 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 89.29
163 TestMultiControlPlane/serial/DeployApp 4.23
164 TestMultiControlPlane/serial/PingHostFromPods 1.01
165 TestMultiControlPlane/serial/AddWorkerNode 26.48
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
168 TestMultiControlPlane/serial/CopyFile 16.53
169 TestMultiControlPlane/serial/StopSecondaryNode 18.77
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.61
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.88
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 117.6
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.53
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
176 TestMultiControlPlane/serial/StopCluster 46.63
177 TestMultiControlPlane/serial/RestartCluster 53.24
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
179 TestMultiControlPlane/serial/AddSecondaryNode 44.02
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
185 TestJSONOutput/start/Command 34.89
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 7.98
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.22
210 TestKicCustomNetwork/create_custom_network 26.14
211 TestKicCustomNetwork/use_default_bridge_network 22.04
212 TestKicExistingNetwork 23.3
213 TestKicCustomSubnet 23.7
214 TestKicStaticIP 23.11
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 41.96
219 TestMountStart/serial/StartWithMountFirst 7.65
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 4.94
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.66
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.24
226 TestMountStart/serial/RestartStopped 7.09
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 58.97
231 TestMultiNode/serial/DeployApp2Nodes 3.6
232 TestMultiNode/serial/PingHostFrom2Pods 0.69
233 TestMultiNode/serial/AddNode 22.95
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.64
236 TestMultiNode/serial/CopyFile 9.44
237 TestMultiNode/serial/StopNode 2.24
238 TestMultiNode/serial/StartAfterStop 7.06
239 TestMultiNode/serial/RestartKeepsNodes 56.46
240 TestMultiNode/serial/DeleteNode 4.96
241 TestMultiNode/serial/StopMultiNode 19.44
242 TestMultiNode/serial/RestartMultiNode 46.89
243 TestMultiNode/serial/ValidateNameConflict 21.89
250 TestScheduledStopUnix 96.17
253 TestInsufficientStorage 11.59
254 TestRunningBinaryUpgrade 291.85
256 TestKubernetesUpgrade 158.73
257 TestMissingContainerUpgrade 92.27
259 TestPause/serial/Start 59.66
260 TestStoppedBinaryUpgrade/Setup 0.54
261 TestStoppedBinaryUpgrade/Upgrade 304.34
262 TestPause/serial/SecondStartNoReconfiguration 6.25
265 TestNoKubernetes/serial/StartNoK8sWithVersion 0.07
266 TestNoKubernetes/serial/StartWithK8s 21.81
274 TestNetworkPlugins/group/false 4.98
278 TestNoKubernetes/serial/StartWithStopK8s 9.96
279 TestNoKubernetes/serial/Start 4.51
280 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
281 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
282 TestNoKubernetes/serial/ProfileList 34.35
283 TestNoKubernetes/serial/Stop 1.26
284 TestNoKubernetes/serial/StartNoArgs 6.31
285 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
293 TestPreload/Start-NoPreload-PullImage 49.63
294 TestPreload/Restart-With-Preload-Check-User-Image 43.59
295 TestStoppedBinaryUpgrade/MinikubeLogs 0.98
297 TestNetworkPlugins/group/auto/Start 40.77
298 TestNetworkPlugins/group/kindnet/Start 41.75
299 TestNetworkPlugins/group/calico/Start 48.91
300 TestNetworkPlugins/group/auto/KubeletFlags 0.37
301 TestNetworkPlugins/group/auto/NetCatPod 8.24
302 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
303 TestNetworkPlugins/group/auto/DNS 0.1
304 TestNetworkPlugins/group/auto/Localhost 0.08
305 TestNetworkPlugins/group/auto/HairPin 0.09
306 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
307 TestNetworkPlugins/group/kindnet/NetCatPod 9.28
308 TestNetworkPlugins/group/kindnet/DNS 0.17
309 TestNetworkPlugins/group/kindnet/Localhost 0.12
310 TestNetworkPlugins/group/kindnet/HairPin 0.12
311 TestNetworkPlugins/group/calico/ControllerPod 6.01
312 TestNetworkPlugins/group/custom-flannel/Start 46.14
313 TestNetworkPlugins/group/calico/KubeletFlags 0.36
314 TestNetworkPlugins/group/calico/NetCatPod 9.22
315 TestNetworkPlugins/group/enable-default-cni/Start 61.69
316 TestNetworkPlugins/group/calico/DNS 0.12
317 TestNetworkPlugins/group/calico/Localhost 0.09
318 TestNetworkPlugins/group/calico/HairPin 0.09
319 TestNetworkPlugins/group/bridge/Start 67.19
320 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
321 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.23
322 TestNetworkPlugins/group/custom-flannel/DNS 0.12
323 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
324 TestNetworkPlugins/group/custom-flannel/HairPin 0.09
325 TestNetworkPlugins/group/flannel/Start 47.24
326 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.31
327 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.2
328 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
329 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
330 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
332 TestStartStop/group/old-k8s-version/serial/FirstStart 53.22
333 TestNetworkPlugins/group/bridge/KubeletFlags 0.41
334 TestNetworkPlugins/group/bridge/NetCatPod 12.27
336 TestStartStop/group/no-preload/serial/FirstStart 47.69
337 TestNetworkPlugins/group/bridge/DNS 0.12
338 TestNetworkPlugins/group/bridge/Localhost 0.09
339 TestNetworkPlugins/group/bridge/HairPin 0.09
340 TestNetworkPlugins/group/flannel/ControllerPod 6.01
341 TestNetworkPlugins/group/flannel/KubeletFlags 0.35
342 TestNetworkPlugins/group/flannel/NetCatPod 9.23
344 TestStartStop/group/embed-certs/serial/FirstStart 42.67
345 TestNetworkPlugins/group/flannel/DNS 0.18
346 TestNetworkPlugins/group/flannel/Localhost 0.14
347 TestNetworkPlugins/group/flannel/HairPin 0.18
348 TestStartStop/group/old-k8s-version/serial/DeployApp 8.41
349 TestStartStop/group/no-preload/serial/DeployApp 8.27
351 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 40.43
354 TestStartStop/group/old-k8s-version/serial/Stop 16.1
355 TestStartStop/group/no-preload/serial/Stop 16.26
356 TestStartStop/group/embed-certs/serial/DeployApp 7.22
357 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
358 TestStartStop/group/old-k8s-version/serial/SecondStart 44.88
359 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
360 TestStartStop/group/no-preload/serial/SecondStart 49.85
362 TestStartStop/group/embed-certs/serial/Stop 18.94
363 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.31
364 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.27
365 TestStartStop/group/embed-certs/serial/SecondStart 43.63
367 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.58
368 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
369 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
370 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 47.23
371 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
372 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
373 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
375 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
376 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
379 TestStartStop/group/newest-cni/serial/FirstStart 26.13
380 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
381 TestPreload/PreloadSrc/gcs 4.18
382 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
383 TestPreload/PreloadSrc/github 4.44
384 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
386 TestPreload/PreloadSrc/gcs-cached 0.56
387 TestStartStop/group/newest-cni/serial/DeployApp 0
389 TestStartStop/group/newest-cni/serial/Stop 2.54
390 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
391 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
392 TestStartStop/group/newest-cni/serial/SecondStart 10.05
393 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
394 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
396 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
397 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
398 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
TestDownloadOnly/v1.28.0/json-events (4.85s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-320689 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-320689 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.84905579s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.85s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0110 08:19:59.242362    7183 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0110 08:19:59.242432    7183 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
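The preload-exists subtest only asserts that the cached tarball is already on disk; no download happens here. A hand-run equivalent, assuming the same MINIKUBE_HOME layout as this job:

	# the exact file the test looks for (path taken from the log lines above)
	ls -lh /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4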

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-320689
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-320689: exit status 85 (73.580217ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-320689 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-320689 │ jenkins │ v1.37.0 │ 10 Jan 26 08:19 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:19:54
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:19:54.442750    7194 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:19:54.443480    7194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:19:54.443491    7194 out.go:374] Setting ErrFile to fd 2...
	I0110 08:19:54.443495    7194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:19:54.443656    7194 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	W0110 08:19:54.443782    7194 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22427-3641/.minikube/config/config.json: open /home/jenkins/minikube-integration/22427-3641/.minikube/config/config.json: no such file or directory
	I0110 08:19:54.444242    7194 out.go:368] Setting JSON to true
	I0110 08:19:54.445039    7194 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":146,"bootTime":1768033048,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:19:54.445098    7194 start.go:143] virtualization: kvm guest
	I0110 08:19:54.449071    7194 out.go:99] [download-only-320689] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0110 08:19:54.449224    7194 preload.go:372] Failed to list preload files: open /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball: no such file or directory
	I0110 08:19:54.449231    7194 notify.go:221] Checking for updates...
	I0110 08:19:54.450542    7194 out.go:171] MINIKUBE_LOCATION=22427
	I0110 08:19:54.451840    7194 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:19:54.453211    7194 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:19:54.454303    7194 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:19:54.455966    7194 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0110 08:19:54.458269    7194 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0110 08:19:54.458474    7194 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:19:54.481659    7194 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:19:54.481791    7194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:19:54.694888    7194 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2026-01-10 08:19:54.685813717 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:19:54.694991    7194 docker.go:319] overlay module found
	I0110 08:19:54.696538    7194 out.go:99] Using the docker driver based on user configuration
	I0110 08:19:54.696562    7194 start.go:309] selected driver: docker
	I0110 08:19:54.696568    7194 start.go:928] validating driver "docker" against <nil>
	I0110 08:19:54.696638    7194 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:19:54.752599    7194 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2026-01-10 08:19:54.74213105 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:19:54.752748    7194 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 08:19:54.753264    7194 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0110 08:19:54.753415    7194 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 08:19:54.755051    7194 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-320689 host does not exist
	  To start a cluster, run: "minikube start -p download-only-320689"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
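Despite the non-zero exit above, this subtest passes: a download-only profile never creates a host, so "minikube logs" is expected to fail here, and the suite treats that failure (exit status 85 in this run) as the success condition. A quick manual reproduction, assuming the profile has not yet been deleted:

	out/minikube-linux-amd64 logs -p download-only-320689
	echo $?   # non-zero while the profile has no host (85 in this run)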

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-320689
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0/json-events (2.71s)

=== RUN   TestDownloadOnly/v1.35.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-241766 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-241766 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (2.713375075s)
--- PASS: TestDownloadOnly/v1.35.0/json-events (2.71s)

TestDownloadOnly/v1.35.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0/preload-exists
I0110 08:20:02.382017    7183 preload.go:188] Checking if preload exists for k8s version v1.35.0 and runtime crio
I0110 08:20:02.382043    7183 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22427-3641/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.35.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-241766
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-241766: exit status 85 (70.296169ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-320689 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-320689 │ jenkins │ v1.37.0 │ 10 Jan 26 08:19 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 10 Jan 26 08:19 UTC │ 10 Jan 26 08:19 UTC │
	│ delete  │ -p download-only-320689                                                                                                                                                   │ download-only-320689 │ jenkins │ v1.37.0 │ 10 Jan 26 08:19 UTC │ 10 Jan 26 08:19 UTC │
	│ start   │ -o=json --download-only -p download-only-241766 --force --alsologtostderr --kubernetes-version=v1.35.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-241766 │ jenkins │ v1.37.0 │ 10 Jan 26 08:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2026/01/10 08:19:59
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0110 08:19:59.719999    7551 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:19:59.720327    7551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:19:59.720339    7551 out.go:374] Setting ErrFile to fd 2...
	I0110 08:19:59.720344    7551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:19:59.720527    7551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:19:59.721017    7551 out.go:368] Setting JSON to true
	I0110 08:19:59.721871    7551 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":152,"bootTime":1768033048,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:19:59.721922    7551 start.go:143] virtualization: kvm guest
	I0110 08:19:59.723675    7551 out.go:99] [download-only-241766] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 08:19:59.723834    7551 notify.go:221] Checking for updates...
	I0110 08:19:59.725266    7551 out.go:171] MINIKUBE_LOCATION=22427
	I0110 08:19:59.726436    7551 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:19:59.727555    7551 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:19:59.728623    7551 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:19:59.729583    7551 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0110 08:19:59.731753    7551 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0110 08:19:59.731940    7551 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:19:59.754039    7551 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:19:59.754126    7551 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:19:59.808577    7551 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2026-01-10 08:19:59.798528968 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:19:59.808680    7551 docker.go:319] overlay module found
	I0110 08:19:59.810243    7551 out.go:99] Using the docker driver based on user configuration
	I0110 08:19:59.810270    7551 start.go:309] selected driver: docker
	I0110 08:19:59.810275    7551 start.go:928] validating driver "docker" against <nil>
	I0110 08:19:59.810345    7551 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:19:59.863984    7551 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2026-01-10 08:19:59.854092635 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:19:59.864141    7551 start_flags.go:333] no existing cluster config was found, will generate one from the flags 
	I0110 08:19:59.864647    7551 start_flags.go:417] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0110 08:19:59.864812    7551 start_flags.go:1001] Wait components to verify : map[apiserver:true system_pods:true]
	I0110 08:19:59.866551    7551 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-241766 host does not exist
	  To start a cluster, run: "minikube start -p download-only-241766"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
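Aside on the start_flags.go:417 entry in the capture above: minikube derives its suggested memory allocation (8000MB here) from the host's memory (sys=32093MB). The sketch below is illustrative only — minikube's real heuristic lives in start_flags.go and is not reproduced here — a hypothetical quarter-of-system rule capped at 8000MB just happens to yield the same figure for this host:

package main

import "fmt"

// suggestMemoryMB is a hypothetical stand-in for minikube's suggestion logic,
// NOT the actual rule from start_flags.go: a quarter of system memory, capped.
func suggestMemoryMB(sysMB int) int {
	suggested := sysMB / 4 // 32093/4 = 8023
	if suggested > 8000 {  // hypothetical cap
		suggested = 8000
	}
	return suggested
}

func main() {
	fmt.Println(suggestMemoryMB(32093)) // prints 8000 — matches the log line above
}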
--- PASS: TestDownloadOnly/v1.35.0/LogsDuration (0.07s)

TestDownloadOnly/v1.35.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0/DeleteAll (0.22s)

TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-241766
--- PASS: TestDownloadOnly/v1.35.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnlyKic (0.4s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-033678 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "download-docker-033678" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-033678
--- PASS: TestDownloadOnlyKic (0.40s)

TestBinaryMirror (0.8s)

=== RUN   TestBinaryMirror
I0110 08:20:03.485544    7183 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-934346 --alsologtostderr --binary-mirror http://127.0.0.1:36511 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-934346" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-934346
--- PASS: TestBinaryMirror (0.80s)

TestOffline (61.76s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-669446 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-669446 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (59.326292761s)
helpers_test.go:176: Cleaning up "offline-crio-669446" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-669446
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-669446: (2.430868029s)
--- PASS: TestOffline (61.76s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-910183
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-910183: exit status 85 (61.794568ms)
-- stdout --
	* Profile "addons-910183" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-910183"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-910183
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-910183: exit status 85 (64.721832ms)
-- stdout --
	* Profile "addons-910183" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-910183"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (90.17s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-910183 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-910183 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m30.168364869s)
--- PASS: TestAddons/Setup (90.17s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-910183 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-910183 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/serial/GCPAuth/FakeCredentials (7.4s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-910183 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-910183 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [5622c18b-bb8e-4a63-9ff1-ce28d7ec8b94] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [5622c18b-bb8e-4a63-9ff1-ce28d7ec8b94] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 7.003644195s
addons_test.go:696: (dbg) Run:  kubectl --context addons-910183 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-910183 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-910183 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (7.40s)

TestAddons/StoppedEnableDisable (16.67s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-910183
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-910183: (16.385570564s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-910183
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-910183
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-910183
--- PASS: TestAddons/StoppedEnableDisable (16.67s)

TestCertOptions (27.77s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-866817 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-866817 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (24.461355432s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-866817 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-866817 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-866817 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-866817" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-866817
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-866817: (2.491066311s)
--- PASS: TestCertOptions (27.77s)

TestCertExpiration (211.37s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-396514 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-396514 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (21.733126513s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-396514 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-396514 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.015149262s)
helpers_test.go:176: Cleaning up "cert-expiration-396514" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-396514
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-396514: (3.617591273s)
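The two --cert-expiration values bracket the interesting cases: 3m forces the certificates to lapse while the test waits, and 8760h is one year. The timings bear this out: the two starts and the delete account for roughly 21.7s + 6.0s + 3.6s of the 211.37s total, leaving a gap of about 180s — the 3-minute certificate lifetime the test sits out before restarting. A standalone check of the duration arithmetic (not part of the suite):

package main

import (
	"fmt"
	"time"
)

func main() {
	short, _ := time.ParseDuration("3m")   // first start: certs expire during the wait
	long, _ := time.ParseDuration("8760h") // second start: a one-year certificate
	fmt.Println(short.Seconds())           // 180 — matches the ~180s gap in the timings above
	fmt.Println(long.Hours() / 24)         // 365 days
}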
--- PASS: TestCertExpiration (211.37s)

TestForceSystemdFlag (22.28s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-776977 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E0110 08:46:57.723298    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/functional-648443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-776977 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (19.632902763s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-776977 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-776977" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-776977
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-776977: (2.368507736s)
--- PASS: TestForceSystemdFlag (22.28s)

TestForceSystemdEnv (22.44s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-144510 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-144510 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (18.027691919s)
helpers_test.go:176: Cleaning up "force-systemd-env-144510" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-144510
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-144510: (4.416812898s)
--- PASS: TestForceSystemdEnv (22.44s)

TestErrorSpam/setup (18.76s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-104024 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-104024 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-104024 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-104024 --driver=docker  --container-runtime=crio: (18.760107719s)
--- PASS: TestErrorSpam/setup (18.76s)

TestErrorSpam/start (0.65s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 start --dry-run
--- PASS: TestErrorSpam/start (0.65s)

TestErrorSpam/status (0.94s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 status
--- PASS: TestErrorSpam/status (0.94s)

TestErrorSpam/pause (5.64s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 pause: exit status 80 (2.243342229s)
-- stdout --
	* Pausing node nospam-104024 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:23:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 pause: exit status 80 (1.59909336s)
-- stdout --
	* Pausing node nospam-104024 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:23:15Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 pause: exit status 80 (1.800505453s)
-- stdout --
	* Pausing node nospam-104024 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:23:17Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 pause" failed: exit status 80
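All three pause attempts fail identically (as do the unpause attempts in the next test): the GUEST_PAUSE path shells out to `sudo runc list -f json` on the node, and runc aborts because its default state directory /run/runc does not exist. A minimal standalone sketch of that probe follows — the command string is taken from the error text above; running it on the minikube node (e.g. via `minikube ssh`) is an assumption, and this is not minikube's implementation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// The failing step the log reports is effectively: sudo runc list -f json
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// When runc's default state dir /run/runc is absent, this fails with
		// `open /run/runc: no such file or directory`, exactly as in the log.
		fmt.Printf("runc list failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("containers: %s\n", out)
}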
--- PASS: TestErrorSpam/pause (5.64s)

TestErrorSpam/unpause (5.19s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 unpause: exit status 80 (1.347167334s)
-- stdout --
	* Unpausing node nospam-104024 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:23:18Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 unpause: exit status 80 (1.65095512s)
-- stdout --
	* Unpausing node nospam-104024 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:23:20Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 unpause: exit status 80 (2.194183373s)
-- stdout --
	* Unpausing node nospam-104024 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2026-01-10T08:23:22Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.19s)

TestErrorSpam/stop (12.57s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 stop: (12.371043756s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-104024 --log_dir /tmp/nospam-104024 stop
--- PASS: TestErrorSpam/stop (12.57s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1865: local sync path: /home/jenkins/minikube-integration/22427-3641/.minikube/files/etc/test/nested/copy/7183/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (35.12s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2244: (dbg) Run:  out/minikube-linux-amd64 start -p functional-648443 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2244: (dbg) Done: out/minikube-linux-amd64 start -p functional-648443 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (35.120831138s)
--- PASS: TestFunctional/serial/StartWithProxy (35.12s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6s)

=== RUN   TestFunctional/serial/SoftStart
I0110 08:24:14.909309    7183 config.go:182] Loaded profile config "functional-648443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-648443 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-648443 --alsologtostderr -v=8: (5.996446657s)
functional_test.go:678: soft start took 5.998937148s for "functional-648443" cluster.
I0110 08:24:20.908190    7183 config.go:182] Loaded profile config "functional-648443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/SoftStart (6.00s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-648443 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.44s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 cache add registry.k8s.io/pause:3.1
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 cache add registry.k8s.io/pause:3.3
functional_test.go:1069: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.44s)

TestFunctional/serial/CacheCmd/cache/add_local (0.87s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1097: (dbg) Run:  docker build -t minikube-local-cache-test:functional-648443 /tmp/TestFunctionalserialCacheCmdcacheadd_local1107912238/001
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 cache add minikube-local-cache-test:functional-648443
functional_test.go:1114: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 cache delete minikube-local-cache-test:functional-648443
functional_test.go:1103: (dbg) Run:  docker rmi minikube-local-cache-test:functional-648443
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.87s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1122: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1130: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1144: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1167: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1173: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-648443 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (274.130578ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 cache reload
functional_test.go:1183: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.46s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1192: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 kubectl -- --context functional-648443 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-648443 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (61.57s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-648443 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-648443 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m1.572126747s)
functional_test.go:776: restart took 1m1.572289931s for "functional-648443" cluster.
I0110 08:25:28.119275    7183 config.go:182] Loaded profile config "functional-648443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestFunctional/serial/ExtraConfig (61.57s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-648443 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.17s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1256: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 logs
functional_test.go:1256: (dbg) Done: out/minikube-linux-amd64 -p functional-648443 logs: (1.169584456s)
--- PASS: TestFunctional/serial/LogsCmd (1.17s)

TestFunctional/serial/LogsFileCmd (1.21s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 logs --file /tmp/TestFunctionalserialLogsFileCmd2058790883/001/logs.txt
functional_test.go:1270: (dbg) Done: out/minikube-linux-amd64 -p functional-648443 logs --file /tmp/TestFunctionalserialLogsFileCmd2058790883/001/logs.txt: (1.208075158s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.21s)

TestFunctional/serial/InvalidService (3.94s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2331: (dbg) Run:  kubectl --context functional-648443 apply -f testdata/invalidsvc.yaml
functional_test.go:2345: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-648443
functional_test.go:2345: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-648443: exit status 115 (339.597807ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30293 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2337: (dbg) Run:  kubectl --context functional-648443 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.94s)

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-648443 config get cpus: exit status 14 (89.439997ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 config set cpus 2
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 config get cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 config unset cpus
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 config get cpus
functional_test.go:1219: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-648443 config get cpus: exit status 14 (78.474429ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
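The exit status 14 above is one instance of minikube mapping each failure class to a distinct exit code; this report alone also shows 23 (the --memory=250MB dry-run in TestFunctional/parallel/DryRun below), 80 (GUEST_PAUSE/GUEST_UNPAUSE), 85 (commands against a profile that does not exist), and 115 (SVC_UNREACHABLE). A reference sketch only — the constant names are assumptions, the numeric values come from the log itself:

package main

import "fmt"

// Names are hypothetical; values are the exit codes observed in this report.
const (
	exitConfigKeyNotFound = 14  // `config get cpus` on an unset key (above)
	exitDryRunRejected    = 23  // `start --dry-run --memory 250MB` (below)
	exitGuestPause        = 80  // TestErrorSpam/pause and TestErrorSpam/unpause
	exitProfileNotFound   = 85  // addons enable/disable on a non-existing cluster
	exitSvcUnreachable    = 115 // TestFunctional/serial/InvalidService
)

func main() {
	fmt.Println(exitConfigKeyNotFound, exitDryRunRejected, exitGuestPause,
		exitProfileNotFound, exitSvcUnreachable)
}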
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

TestFunctional/parallel/DashboardCmd (5.93s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-648443 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 0 -p functional-648443 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 42721: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (5.93s)

TestFunctional/parallel/DryRun (0.78s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:994: (dbg) Run:  out/minikube-linux-amd64 start -p functional-648443 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:994: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-648443 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (521.288844ms)
-- stdout --
	* [functional-648443] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0110 08:25:44.513871   41571 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:25:44.514172   41571 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:25:44.514184   41571 out.go:374] Setting ErrFile to fd 2...
	I0110 08:25:44.514190   41571 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:25:44.514466   41571 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:25:44.514975   41571 out.go:368] Setting JSON to false
	I0110 08:25:44.516170   41571 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":497,"bootTime":1768033048,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:25:44.516240   41571 start.go:143] virtualization: kvm guest
	I0110 08:25:44.590850   41571 out.go:179] * [functional-648443] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 08:25:44.655285   41571 notify.go:221] Checking for updates...
	I0110 08:25:44.655446   41571 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:25:44.698988   41571 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:25:44.772048   41571 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:25:44.837865   41571 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:25:44.839334   41571 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 08:25:44.841793   41571 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:25:44.843448   41571 config.go:182] Loaded profile config "functional-648443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:25:44.844225   41571 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:25:44.874713   41571 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:25:44.875016   41571 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:25:44.955025   41571 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2026-01-10 08:25:44.943452942 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:25:44.955157   41571 docker.go:319] overlay module found
	I0110 08:25:44.956579   41571 out.go:179] * Using the docker driver based on existing profile
	I0110 08:25:44.957529   41571 start.go:309] selected driver: docker
	I0110 08:25:44.957547   41571 start.go:928] validating driver "docker" against &{Name:functional-648443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-648443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:25:44.957670   41571 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:25:44.959647   41571 out.go:203] 
	W0110 08:25:44.960780   41571 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0110 08:25:44.962558   41571 out.go:203] 

** /stderr **
functional_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 start -p functional-648443 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.78s)
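
Both invocations behave as asserted: --dry-run still runs resource validation, so the 250MB request trips the RSRC_INSUFFICIENT_REQ_MEMORY guard (exit status 23), while the second run without --memory validates the existing profile cleanly. In isolation:

  # dry-run creates nothing, but the ~1800MB memory floor still applies
  out/minikube-linux-amd64 start -p functional-648443 --dry-run --memory 250MB \
    --driver=docker --container-runtime=crio    # exit 23 (RSRC_INSUFFICIENT_REQ_MEMORY)
  out/minikube-linux-amd64 start -p functional-648443 --dry-run \
    --driver=docker --container-runtime=crio    # exit 0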

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1040: (dbg) Run:  out/minikube-linux-amd64 start -p functional-648443 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1040: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-648443 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (194.944631ms)

-- stdout --
	* [functional-648443] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0110 08:25:44.839778   41618 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:25:44.839894   41618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:25:44.839904   41618 out.go:374] Setting ErrFile to fd 2...
	I0110 08:25:44.839928   41618 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:25:44.840351   41618 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:25:44.840924   41618 out.go:368] Setting JSON to false
	I0110 08:25:44.842150   41618 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":497,"bootTime":1768033048,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:25:44.842247   41618 start.go:143] virtualization: kvm guest
	I0110 08:25:44.843682   41618 out.go:179] * [functional-648443] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0110 08:25:44.844874   41618 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:25:44.844921   41618 notify.go:221] Checking for updates...
	I0110 08:25:44.847079   41618 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:25:44.848341   41618 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:25:44.849694   41618 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:25:44.850940   41618 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 08:25:44.852235   41618 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:25:44.853943   41618 config.go:182] Loaded profile config "functional-648443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:25:44.854650   41618 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:25:44.886232   41618 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:25:44.886359   41618 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:25:44.966101   41618 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2026-01-10 08:25:44.954900744 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:25:44.966237   41618 docker.go:319] overlay module found
	I0110 08:25:44.967758   41618 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0110 08:25:44.969041   41618 start.go:309] selected driver: docker
	I0110 08:25:44.969059   41618 start.go:928] validating driver "docker" against &{Name:functional-648443 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1767944074-22401@sha256:5af296c365892fa7c4c61cd02bf3cdb33e2c362939e717d7686924b3b3f07773 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0 ClusterName:functional-648443 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s Rosetta:false}
	I0110 08:25:44.969200   41618 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:25:44.971284   41618 out.go:203] 
	W0110 08:25:44.972512   41618 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0110 08:25:44.973601   41618 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)
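
This is the same memory-floor failure as DryRun, asserted against the French translation (the French text is the localized equivalent of the English "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY" message above). The log does not show how the locale is selected; presumably the test exports a French locale, which minikube's message translation picks up from the standard environment variables. Something along these lines should reproduce it, though the locale-handling details are an assumption:

  # assumes French messages are bundled and the locale env vars are honored
  LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-648443 \
    --dry-run --memory 250MB --driver=docker --container-runtime=crio
  # expected: "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : ..."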

TestFunctional/parallel/StatusCmd (1.14s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)
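
Three output modes are exercised: default text, a Go-template format string, and JSON. The "kublet" in the recorded format string is a typo in the test's label text, but it is harmless since only the {{.Kubelet}} field reference has to resolve. Equivalent invocations:

  out/minikube-linux-amd64 -p functional-648443 status
  out/minikube-linux-amd64 -p functional-648443 status \
    -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  out/minikube-linux-amd64 -p functional-648443 status -o json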

TestFunctional/parallel/ServiceCmdConnect (7.74s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1641: (dbg) Run:  kubectl --context functional-648443 create deployment hello-node-connect --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1645: (dbg) Run:  kubectl --context functional-648443 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-5d95464fd4-zbdqh" [27ae4452-0fd0-41d9-976c-29b7ff0bb1b2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-5d95464fd4-zbdqh" [27ae4452-0fd0-41d9-976c-29b7ff0bb1b2] Running
functional_test.go:1650: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.002961125s
functional_test.go:1659: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 service hello-node-connect --url
functional_test.go:1665: found endpoint for hello-node-connect: http://192.168.49.2:32021
functional_test.go:1685: http://192.168.49.2:32021: success! body:
Request served by hello-node-connect-5d95464fd4-zbdqh

HTTP/1.1 GET /

Host: 192.168.49.2:32021
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.74s)
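
The flow is a standard NodePort round-trip: deploy the echo server, expose it, resolve the node URL through minikube, then request it. Condensed:

  kubectl --context functional-648443 create deployment hello-node-connect \
    --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
  kubectl --context functional-648443 expose deployment hello-node-connect --type=NodePort --port=8080
  kubectl --context functional-648443 wait --for=condition=ready pod -l app=hello-node-connect --timeout=120s
  URL=$(out/minikube-linux-amd64 -p functional-648443 service hello-node-connect --url)
  curl -s "$URL"    # echo-server answers with the request it received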

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1700: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 addons list
functional_test.go:1712: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)
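
The JSON form is the machine-readable companion to the plain listing; with jq (an assumption, not part of the test) the enabled addons can be filtered out. The exact JSON shape may vary between minikube versions:

  out/minikube-linux-amd64 -p functional-648443 addons list -o json \
    | jq -r 'to_entries[] | select(.value.Status == "enabled") | .key'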

TestFunctional/parallel/PersistentVolumeClaim (22.72s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [a9ef3269-b3d2-4eb7-b7e1-71447d95e39d] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004025762s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-648443 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-648443 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-648443 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-648443 apply -f testdata/storage-provisioner/pod.yaml
I0110 08:25:58.465947    7183 detect.go:211] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [3045e017-0aa9-4b2f-92f9-eb92ebd6f396] Pending
helpers_test.go:353: "sp-pod" [3045e017-0aa9-4b2f-92f9-eb92ebd6f396] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [3045e017-0aa9-4b2f-92f9-eb92ebd6f396] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.002583676s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-648443 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-648443 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-648443 delete -f testdata/storage-provisioner/pod.yaml: (1.057824032s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-648443 apply -f testdata/storage-provisioner/pod.yaml
I0110 08:26:07.725031    7183 detect.go:211] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [4456997b-c2bf-4d28-9436-3c1a6377cfb3] Pending
helpers_test.go:353: "sp-pod" [4456997b-c2bf-4d28-9436-3c1a6377cfb3] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003791642s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-648443 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (22.72s)
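
The test proves the claim outlives its consumer: a pod writes /tmp/mount/foo, is deleted, and a replacement pod bound to the same claim still sees the file. A sketch of the same flow; the manifest shown in comments is illustrative, the real testdata/storage-provisioner files may differ:

  # pvc.yaml (illustrative):
  #   apiVersion: v1
  #   kind: PersistentVolumeClaim
  #   metadata: {name: myclaim}
  #   spec:
  #     accessModes: [ReadWriteOnce]
  #     resources: {requests: {storage: 500Mi}}
  kubectl --context functional-648443 apply -f pvc.yaml
  kubectl --context functional-648443 apply -f pod.yaml       # pod mounts the claim at /tmp/mount
  kubectl --context functional-648443 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-648443 delete -f pod.yaml
  kubectl --context functional-648443 apply -f pod.yaml       # fresh pod, same claim
  kubectl --context functional-648443 exec sp-pod -- ls /tmp/mount   # foo survives the pod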

TestFunctional/parallel/SSHCmd (0.52s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1735: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh "echo hello"
functional_test.go:1752: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.52s)

TestFunctional/parallel/CpCmd (2.2s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh -n functional-648443 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 cp functional-648443:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd643856766/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh -n functional-648443 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh -n functional-648443 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.20s)
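
cp is exercised in both directions, including a target directory that does not yet exist on the node; each copy is verified by cat-ing the file back over ssh. By hand:

  # host -> node
  out/minikube-linux-amd64 -p functional-648443 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-linux-amd64 -p functional-648443 ssh -n functional-648443 "sudo cat /home/docker/cp-test.txt"
  # node -> host
  out/minikube-linux-amd64 -p functional-648443 cp functional-648443:/home/docker/cp-test.txt /tmp/cp-test.txt
  # host -> node path that does not exist yet (directories are created)
  out/minikube-linux-amd64 -p functional-648443 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt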

TestFunctional/parallel/MySQL (21.95s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1803: (dbg) Run:  kubectl --context functional-648443 replace --force -f testdata/mysql.yaml
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-l6kn5" [b2d8d8a0-b042-44b9-8d18-8092b665e10c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-l6kn5" [b2d8d8a0-b042-44b9-8d18-8092b665e10c] Running
functional_test.go:1809: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 14.005182115s
functional_test.go:1817: (dbg) Run:  kubectl --context functional-648443 exec mysql-7d7b65bc95-l6kn5 -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-648443 exec mysql-7d7b65bc95-l6kn5 -- mysql -ppassword -e "show databases;": exit status 1 (123.147708ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-648443 exec mysql-7d7b65bc95-l6kn5 -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-648443 exec mysql-7d7b65bc95-l6kn5 -- mysql -ppassword -e "show databases;": exit status 1 (115.045645ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-648443 exec mysql-7d7b65bc95-l6kn5 -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-648443 exec mysql-7d7b65bc95-l6kn5 -- mysql -ppassword -e "show databases;": exit status 1 (133.299995ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-648443 exec mysql-7d7b65bc95-l6kn5 -- mysql -ppassword -e "show databases;"
functional_test.go:1817: (dbg) Non-zero exit: kubectl --context functional-648443 exec mysql-7d7b65bc95-l6kn5 -- mysql -ppassword -e "show databases;": exit status 1 (84.985804ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1817: (dbg) Run:  kubectl --context functional-648443 exec mysql-7d7b65bc95-l6kn5 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.95s)
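
The four non-zero exits are startup noise rather than regressions: the pod reports Running before mysqld has created its socket (ERROR 2002) and before the root password takes effect (ERROR 1045), so the test simply retries until the query succeeds. A comparable poll loop (the pod name is specific to this run):

  # retry until mysqld inside the pod accepts the root password
  until kubectl --context functional-648443 exec mysql-7d7b65bc95-l6kn5 -- \
      mysql -ppassword -e "show databases;"; do
    sleep 2
  done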

TestFunctional/parallel/FileSync (0.27s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1939: Checking for existence of /etc/test/nested/copy/7183/hosts within VM
functional_test.go:1941: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh "sudo cat /etc/test/nested/copy/7183/hosts"
functional_test.go:1946: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)
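
FileSync covers minikube's file-sync feature: files placed under $MINIKUBE_HOME/files on the host are copied into the node at the same relative path during start, which is why /etc/test/nested/copy/7183/hosts exists inside the VM (7183 is the test process pid used to namespace the path). Direct use looks like:

  # host side: anything under ~/.minikube/files/ is mirrored into the node on start
  mkdir -p ~/.minikube/files/etc/demo
  echo "synced" > ~/.minikube/files/etc/demo/hosts
  minikube start
  minikube ssh "cat /etc/demo/hosts"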

TestFunctional/parallel/CertSync (1.78s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1982: Checking for existence of /etc/ssl/certs/7183.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh "sudo cat /etc/ssl/certs/7183.pem"
functional_test.go:1982: Checking for existence of /usr/share/ca-certificates/7183.pem within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh "sudo cat /usr/share/ca-certificates/7183.pem"
functional_test.go:1982: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1983: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/71832.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh "sudo cat /etc/ssl/certs/71832.pem"
functional_test.go:2009: Checking for existence of /usr/share/ca-certificates/71832.pem within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh "sudo cat /usr/share/ca-certificates/71832.pem"
functional_test.go:2009: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2010: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.78s)
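
CertSync is the certificate counterpart: PEM files placed under $MINIKUBE_HOME/certs are installed into the node at /etc/ssl/certs and /usr/share/ca-certificates, plus an OpenSSL-style hashed .0 name (the 51391683.0 and 3ec20f2e.0 entries above). For example (my-ca.pem is a placeholder name):

  cp my-ca.pem ~/.minikube/certs/
  minikube start
  minikube ssh "sudo cat /etc/ssl/certs/my-ca.pem"
  minikube ssh "sudo cat /usr/share/ca-certificates/my-ca.pem"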

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-648443 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
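
The Go template walks the first node's label map and prints each key; handy as a one-liner label audit:

  kubectl --context functional-648443 get nodes -o go-template \
    --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}}={{$v}} {{end}}'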

TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh "sudo systemctl is-active docker"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-648443 ssh "sudo systemctl is-active docker": exit status 1 (295.156587ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2037: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh "sudo systemctl is-active containerd"
functional_test.go:2037: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-648443 ssh "sudo systemctl is-active containerd": exit status 1 (286.605506ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.58s)
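
The non-zero exits are the assertion here: with crio as the active runtime, docker and containerd must be inactive, and systemctl is-active exits 3 for an inactive unit (surfaced through the ssh wrapper as the exit status seen above). Checked directly:

  # systemd convention: exit 0 = active, 3 = inactive
  out/minikube-linux-amd64 -p functional-648443 ssh "sudo systemctl is-active crio"        # active
  out/minikube-linux-amd64 -p functional-648443 ssh "sudo systemctl is-active docker"      # inactive, non-zero
  out/minikube-linux-amd64 -p functional-648443 ssh "sudo systemctl is-active containerd"  # inactive, non-zero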

TestFunctional/parallel/License (0.27s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2298: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.27s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.15s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-648443 create deployment hello-node --image ghcr.io/medyagh/image-mirrors/kicbase/echo-server
functional_test.go:1460: (dbg) Run:  kubectl --context functional-648443 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-684ffdf98c-5rd4d" [fa0e09e2-6105-4c49-9d10-b11b4020bc2a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-684ffdf98c-5rd4d" [fa0e09e2-6105-4c49-9d10-b11b4020bc2a] Running
functional_test.go:1465: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003139187s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.15s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1295: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)
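
The "profile lis" argument is a deliberate typo: the test asserts that a mistyped subcommand does not implicitly create a profile named "lis", then lists profiles as JSON to confirm the set is unchanged:

  out/minikube-linux-amd64 profile lis                  # typo on purpose; must not create "lis"
  out/minikube-linux-amd64 profile list --output json   # verify only real profiles exist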

TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1330: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1335: Took "385.161741ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1344: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1349: Took "70.772895ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

TestFunctional/parallel/MountCmd/any-port (14.33s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-648443 /tmp/TestFunctionalparallelMountCmdany-port1600279188/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1768033536290377523" to /tmp/TestFunctionalparallelMountCmdany-port1600279188/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1768033536290377523" to /tmp/TestFunctionalparallelMountCmdany-port1600279188/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1768033536290377523" to /tmp/TestFunctionalparallelMountCmdany-port1600279188/001/test-1768033536290377523
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-648443 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (303.936277ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0110 08:25:36.594612    7183 retry.go:84] will retry after 700ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 10 08:25 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 10 08:25 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 10 08:25 test-1768033536290377523
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh cat /mount-9p/test-1768033536290377523
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-648443 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [3466265b-af98-47c0-b2bb-841c2751c0d4] Pending
helpers_test.go:353: "busybox-mount" [3466265b-af98-47c0-b2bb-841c2751c0d4] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [3466265b-af98-47c0-b2bb-841c2751c0d4] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [3466265b-af98-47c0-b2bb-841c2751c0d4] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 11.004461137s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-648443 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-648443 /tmp/TestFunctionalparallelMountCmdany-port1600279188/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (14.33s)
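
The mount test keeps `minikube mount` running as a background daemon, polls findmnt until the 9p mount appears (hence the one failed probe and the 700ms retry above), then drives a pod through the mount. The manual equivalent:

  out/minikube-linux-amd64 mount -p functional-648443 /tmp/mnt-src:/mount-9p &   # must stay running
  out/minikube-linux-amd64 -p functional-648443 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-648443 ssh -- ls -la /mount-9p
  out/minikube-linux-amd64 -p functional-648443 ssh "sudo umount -f /mount-9p"   # cleanup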

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1381: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1386: Took "335.858853ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1394: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1399: Took "55.735873ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctional/parallel/ServiceCmd/List (0.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1474: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.66s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1504: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 service list -o json
functional_test.go:1509: Took "587.857432ms" to run "out/minikube-linux-amd64 -p functional-648443 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.59s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1524: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 service --namespace=default --https --url hello-node
functional_test.go:1537: found endpoint: https://192.168.49.2:31917
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.46s)

TestFunctional/parallel/ServiceCmd/Format (0.45s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.45s)

TestFunctional/parallel/ServiceCmd/URL (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1574: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 service hello-node --url
functional_test.go:1580: found endpoint for hello-node: http://192.168.49.2:31917
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.52s)
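
HTTPS, Format, and URL resolve the same NodePort three ways; --format takes a Go template over the service record, here extracting just the IP:

  out/minikube-linux-amd64 -p functional-648443 service hello-node --url
  out/minikube-linux-amd64 -p functional-648443 service hello-node --https --url
  out/minikube-linux-amd64 -p functional-648443 service hello-node --url --format '{{.IP}}'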

TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-648443 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0
registry.k8s.io/kube-proxy:v1.35.0
registry.k8s.io/kube-controller-manager:v1.35.0
registry.k8s.io/kube-apiserver:v1.35.0
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-648443
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-648443
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-648443 image ls --format short --alsologtostderr:
I0110 08:25:55.090506   45972 out.go:360] Setting OutFile to fd 1 ...
I0110 08:25:55.090789   45972 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:25:55.090800   45972 out.go:374] Setting ErrFile to fd 2...
I0110 08:25:55.090805   45972 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:25:55.091008   45972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
I0110 08:25:55.091570   45972 config.go:182] Loaded profile config "functional-648443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 08:25:55.091699   45972 config.go:182] Loaded profile config "functional-648443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 08:25:55.092142   45972 cli_runner.go:164] Run: docker container inspect functional-648443 --format={{.State.Status}}
I0110 08:25:55.112940   45972 ssh_runner.go:195] Run: systemctl --version
I0110 08:25:55.112982   45972 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-648443
I0110 08:25:55.130549   45972 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/functional-648443/id_rsa Username:docker}
I0110 08:25:55.222323   45972 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-648443 image ls --format table --alsologtostderr:
┌───────────────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                       IMAGE                       │                  TAG                  │   IMAGE ID    │  SIZE  │
├───────────────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy                        │ v1.35.0                               │ 32652ff1bbe6b │ 72MB   │
│ registry.k8s.io/pause                             │ 3.3                                   │ 0184c1613d929 │ 686kB  │
│ registry.k8s.io/pause                             │ latest                                │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd                        │ v20250512-df8de77b                    │ 409467f978b4a │ 109MB  │
│ localhost/minikube-local-cache-test               │ functional-648443                     │ 089f59febd3d4 │ 3.33kB │
│ public.ecr.aws/docker/library/mysql               │ 8.4                                   │ 54c6e074ef93c │ 804MB  │
│ public.ecr.aws/nginx/nginx                        │ alpine                                │ b9d44994d8add │ 63.3MB │
│ registry.k8s.io/etcd                              │ 3.6.6-0                               │ 0a108f7189562 │ 63.6MB │
│ gcr.io/k8s-minikube/busybox                       │ latest                                │ beae173ccac6a │ 1.46MB │
│ gcr.io/k8s-minikube/storage-provisioner           │ v5                                    │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-controller-manager           │ v1.35.0                               │ 2c9a4b058bd7e │ 76.9MB │
│ registry.k8s.io/pause                             │ 3.1                                   │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                             │ 3.10.1                                │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/coredns/coredns                   │ v1.13.1                               │ aa5e3ebc0dfed │ 79.2MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ functional-648443                     │ 9056ab77afb8e │ 4.95MB │
│ ghcr.io/medyagh/image-mirrors/kicbase/echo-server │ latest                                │ 9056ab77afb8e │ 4.95MB │
│ localhost/my-image                                │ functional-648443                     │ a63a581665c1b │ 1.47MB │
│ registry.k8s.io/kube-scheduler                    │ v1.35.0                               │ 550794e3b12ac │ 52.8MB │
│ docker.io/kindest/kindnetd                        │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ 4921d7a6dffa9 │ 108MB  │
│ gcr.io/k8s-minikube/busybox                       │ 1.28.4-glibc                          │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-apiserver                    │ v1.35.0                               │ 5c6acd67e9cd1 │ 90.8MB │
└───────────────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-648443 image ls --format table --alsologtostderr:
I0110 08:25:58.340715   46727 out.go:360] Setting OutFile to fd 1 ...
I0110 08:25:58.340861   46727 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:25:58.340872   46727 out.go:374] Setting ErrFile to fd 2...
I0110 08:25:58.340878   46727 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:25:58.341106   46727 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
I0110 08:25:58.341672   46727 config.go:182] Loaded profile config "functional-648443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 08:25:58.341793   46727 config.go:182] Loaded profile config "functional-648443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 08:25:58.342209   46727 cli_runner.go:164] Run: docker container inspect functional-648443 --format={{.State.Status}}
I0110 08:25:58.363303   46727 ssh_runner.go:195] Run: systemctl --version
I0110 08:25:58.363372   46727 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-648443
I0110 08:25:58.382630   46727 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/functional-648443/id_rsa Username:docker}
I0110 08:25:58.479594   46727 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-648443 image ls --format json --alsologtostderr:
[{"id":"5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499","repoDigests":["registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3","registry.k8s.io/kube-apiserver@sha256:50e01ce089b6b6508e2f68ba0da943a3bc4134596e7e2afaac27dd26f71aca7a"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0"],"size":"90844140"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b809
4a7e4e568ca9b1869c71b053cdf8b5dc3029","docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"a0ea23a884af6181f8477bdaa43c8925d256f5e69c03aade6373f66003fc50a5","repoDigests":["docker.io/library/f83b005ce48336d1e1cb7e11044c0777d601c445d750421268f83e470aa0277e-tmp@sha256:5b748db72301a6d53a1b71e04a2d9cd3ebf36bc2f9e9f40d269af3ed95b7c57e"],"repoTags":[],"size":"1466132"},{"id":"089f59febd3d43015911068a3abfd06268cd60a1f2d1b297a4523da1539f465e","repoDigests":["localhost/minikube-local-cache-test@sha256:58c5c3e6f09a5cbff93c92afa8e1942debd4c42d3c02cc6c8d3ac59dcd85d986"],"repoTags":["lo
calhost/minikube-local-cache-test:functional-648443"],"size":"3330"},{"id":"54c6e074ef93c709bfd8e76a38f54a65e9b5a38d25c9cf82e2633a21f89cd009","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:615302383ec847282233669b4c18396aa075b1279ff7729af0dcd99784361659","public.ecr.aws/docker/library/mysql@sha256:90544b3775490579867a30988d48f0215fc3b88d78d8d62b2c0d96ee9226a2b7"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803768460"},{"id":"2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111","registry.k8s.io/kube-controller-manager@sha256:e0ce4c7d278a001734bbd8020ed1b7e535ae9d2412c700032eb3df190ea91a62"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0"],"size":"76893520"},{"id":"32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8","repoDigests":["registry.k8s.io/kube-proxy@sha256:ad87ae17f92f26144bd5a35fc86a73f2fae6effd1666db51b
c03f8e9213de532","registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0"],"size":"71986585"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb
06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251","repoDigests":["docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27","docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"107598204"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998","gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d3
60bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"a63a581665c1bca278d540ad9266a99d197c943ed6593a13031df9ec698b1616","repoDigests":["localhost/my-image@sha256:79968369285dfa072df893b6566f0d7fcae9b79d15fe5116ebdec8dc1c8197fd"],"repoTags":["localhost/my-image:functional-648443"],"size":"1468744"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58
c1127ab47819f","registry.k8s.io/kube-scheduler@sha256:dd2b6a420b171e83748166a66372f43384b3142fc4f6f56a6240a9e152cccd69"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0"],"size":"52763986"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0a
ae68296150078d379c30cf"],"repoTags":["ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-648443","ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest"],"size":"4945146"},{"id":"b9d44994d8adde234cc849b6518ae39e786c40b1a7c9cc1de674fb3e7f913fc2","repoDigests":["public.ecr.aws/nginx/nginx@sha256:92e3aff70715f47c5c05580bbe7ed66cb0625814e71b8885ccdbb6d89496f87f","public.ecr.aws/nginx/nginx@sha256:a6fbdb4b73007c40f67bfc798a2045503b634f9c53e8309396e5aaf38c418ac0"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"63312028"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":["registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a","registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"63582405"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-648443 image ls --format json --alsologtostderr:
I0110 08:25:58.102520   46647 out.go:360] Setting OutFile to fd 1 ...
I0110 08:25:58.102867   46647 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:25:58.102879   46647 out.go:374] Setting ErrFile to fd 2...
I0110 08:25:58.102886   46647 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:25:58.103195   46647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
I0110 08:25:58.103969   46647 config.go:182] Loaded profile config "functional-648443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 08:25:58.104137   46647 config.go:182] Loaded profile config "functional-648443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 08:25:58.104726   46647 cli_runner.go:164] Run: docker container inspect functional-648443 --format={{.State.Status}}
I0110 08:25:58.123707   46647 ssh_runner.go:195] Run: systemctl --version
I0110 08:25:58.123776   46647 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-648443
I0110 08:25:58.141997   46647 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/functional-648443/id_rsa Username:docker}
I0110 08:25:58.237644   46647 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
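For post-processing, the JSON emitted by "image ls --format json" is a flat array of objects with id, repoDigests, repoTags and size fields (size in bytes, as a string). A minimal sketch of filtering it on the host, assuming a generic "minikube" binary on PATH and jq installed (neither is part of the test itself):

# Print every tagged image in the profile together with its size in bytes.
minikube -p functional-648443 image ls --format json \
  | jq -r '.[] | select(.repoTags | length > 0) | "\(.repoTags[0])\t\(.size)"' \
  | sort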

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-648443 image ls --format yaml --alsologtostderr:
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "249229937"
- id: b9d44994d8adde234cc849b6518ae39e786c40b1a7c9cc1de674fb3e7f913fc2
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:92e3aff70715f47c5c05580bbe7ed66cb0625814e71b8885ccdbb6d89496f87f
- public.ecr.aws/nginx/nginx@sha256:a6fbdb4b73007c40f67bfc798a2045503b634f9c53e8309396e5aaf38c418ac0
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "63312028"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "63582405"
- id: 5c6acd67e9cd10eb60a246cd233db251ef62ea97e6572f897e873f0cb648f499
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:32f98b308862e1cf98c900927d84630fb86a836a480f02752a779eb85c1489f3
- registry.k8s.io/kube-apiserver@sha256:50e01ce089b6b6508e2f68ba0da943a3bc4134596e7e2afaac27dd26f71aca7a
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0
size: "90844140"
- id: 32652ff1bbe6b6df16a3dc621fcad4e6e185a2852bffc5847aab79e36c54bfb8
repoDigests:
- registry.k8s.io/kube-proxy@sha256:ad87ae17f92f26144bd5a35fc86a73f2fae6effd1666db51bc03f8e9213de532
- registry.k8s.io/kube-proxy@sha256:c818ca1eff765e35348b77e484da915175cdf483f298e1f9885ed706fcbcb34c
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0
size: "71986585"
- id: 550794e3b12ac21ec7fd940bdfb45f7c8dae3c52acd8c70e580b882de20c3dcc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0ab622491a82532e01876d55e365c08c5bac01bcd5444a8ed58c1127ab47819f
- registry.k8s.io/kube-scheduler@sha256:dd2b6a420b171e83748166a66372f43384b3142fc4f6f56a6240a9e152cccd69
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0
size: "52763986"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 089f59febd3d43015911068a3abfd06268cd60a1f2d1b297a4523da1539f465e
repoDigests:
- localhost/minikube-local-cache-test@sha256:58c5c3e6f09a5cbff93c92afa8e1942debd4c42d3c02cc6c8d3ac59dcd85d986
repoTags:
- localhost/minikube-local-cache-test:functional-648443
size: "3330"
- id: 2c9a4b058bd7e6ec479c38f9e8a1dac2f5ee5b0a3ebda6dfac968e8720229508
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:3e343fd915d2e214b9a68c045b94017832927edb89aafa471324f8d05a191111
- registry.k8s.io/kube-controller-manager@sha256:e0ce4c7d278a001734bbd8020ed1b7e535ae9d2412c700032eb3df190ea91a62
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0
size: "76893520"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-648443
- ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
size: "4945146"
- id: 54c6e074ef93c709bfd8e76a38f54a65e9b5a38d25c9cf82e2633a21f89cd009
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:615302383ec847282233669b4c18396aa075b1279ff7729af0dcd99784361659
- public.ecr.aws/docker/library/mysql@sha256:90544b3775490579867a30988d48f0215fc3b88d78d8d62b2c0d96ee9226a2b7
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803768460"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251
repoDigests:
- docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "107598204"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-648443 image ls --format yaml --alsologtostderr:
I0110 08:25:55.314547   46027 out.go:360] Setting OutFile to fd 1 ...
I0110 08:25:55.314812   46027 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:25:55.314821   46027 out.go:374] Setting ErrFile to fd 2...
I0110 08:25:55.314826   46027 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:25:55.315079   46027 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
I0110 08:25:55.315647   46027 config.go:182] Loaded profile config "functional-648443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 08:25:55.315772   46027 config.go:182] Loaded profile config "functional-648443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 08:25:55.316187   46027 cli_runner.go:164] Run: docker container inspect functional-648443 --format={{.State.Status}}
I0110 08:25:55.336659   46027 ssh_runner.go:195] Run: systemctl --version
I0110 08:25:55.336722   46027 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-648443
I0110 08:25:55.355117   46027 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/functional-648443/id_rsa Username:docker}
I0110 08:25:55.449345   46027 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-648443 ssh pgrep buildkitd: exit status 1 (268.983245ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 image build -t localhost/my-image:functional-648443 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-648443 image build -t localhost/my-image:functional-648443 testdata/build --alsologtostderr: (2.077237155s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-648443 image build -t localhost/my-image:functional-648443 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> a0ea23a884a
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-648443
--> a63a581665c
Successfully tagged localhost/my-image:functional-648443
a63a581665c1bca278d540ad9266a99d197c943ed6593a13031df9ec698b1616
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-648443 image build -t localhost/my-image:functional-648443 testdata/build --alsologtostderr:
I0110 08:25:55.810587   46206 out.go:360] Setting OutFile to fd 1 ...
I0110 08:25:55.810788   46206 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:25:55.810803   46206 out.go:374] Setting ErrFile to fd 2...
I0110 08:25:55.810809   46206 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0110 08:25:55.811081   46206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
I0110 08:25:55.811913   46206 config.go:182] Loaded profile config "functional-648443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 08:25:55.812625   46206 config.go:182] Loaded profile config "functional-648443": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
I0110 08:25:55.813376   46206 cli_runner.go:164] Run: docker container inspect functional-648443 --format={{.State.Status}}
I0110 08:25:55.834888   46206 ssh_runner.go:195] Run: systemctl --version
I0110 08:25:55.834969   46206 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-648443
I0110 08:25:55.852244   46206 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/functional-648443/id_rsa Username:docker}
I0110 08:25:55.944800   46206 build_images.go:162] Building image from path: /tmp/build.2775037370.tar
I0110 08:25:55.944870   46206 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0110 08:25:55.952928   46206 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2775037370.tar
I0110 08:25:55.956882   46206 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2775037370.tar: stat -c "%s %y" /var/lib/minikube/build/build.2775037370.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2775037370.tar': No such file or directory
I0110 08:25:55.956913   46206 ssh_runner.go:362] scp /tmp/build.2775037370.tar --> /var/lib/minikube/build/build.2775037370.tar (3072 bytes)
I0110 08:25:55.973921   46206 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2775037370
I0110 08:25:55.981149   46206 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2775037370 -xf /var/lib/minikube/build/build.2775037370.tar
I0110 08:25:55.989354   46206 crio.go:315] Building image: /var/lib/minikube/build/build.2775037370
I0110 08:25:55.989426   46206 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-648443 /var/lib/minikube/build/build.2775037370 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0110 08:25:57.807572   46206 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-648443 /var/lib/minikube/build/build.2775037370 --cgroup-manager=cgroupfs: (1.818118697s)
I0110 08:25:57.807633   46206 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2775037370
I0110 08:25:57.815757   46206 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2775037370.tar
I0110 08:25:57.822978   46206 build_images.go:218] Built localhost/my-image:functional-648443 from /tmp/build.2775037370.tar
I0110 08:25:57.823010   46206 build_images.go:134] succeeded building to: functional-648443
I0110 08:25:57.823016   46206 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.57s)
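The three STEP lines above pin down the shape of the testdata/build context even though the log never shows its files. A hedged reconstruction, with the payload of content.txt assumed since it does not appear in the output:

mkdir -p build && cd build
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
echo "placeholder" > content.txt   # assumed contents; the real file is not shown in the log
minikube -p functional-648443 image build -t localhost/my-image:functional-648443 .

On the crio runtime the build is delegated to "sudo podman build ... --cgroup-manager=cgroupfs" inside the node, which is exactly what the stderr trace above records.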

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.37s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0 ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-648443
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-648443 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.18s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.54s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2280: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-648443 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.94s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag ghcr.io/medyagh/image-mirrors/kicbase/echo-server:latest ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-648443
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 image load --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-648443 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.13s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 image save ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-648443 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (2.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-648443 --alsologtostderr
functional_test.go:407: (dbg) Done: out/minikube-linux-amd64 -p functional-648443 image rm ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-648443 --alsologtostderr: (1.932406377s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (2.17s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-648443 /tmp/TestFunctionalparallelMountCmdspecific-port3423665734/001:/mount-9p --alsologtostderr -v=1 --port 36365]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-648443 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (287.226915ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0110 08:25:50.906193    7183 retry.go:84] will retry after 500ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh -- ls -la /mount-9p
2026/01/10 08:25:51 [DEBUG] GET http://127.0.0.1:42487/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-648443 /tmp/TestFunctionalparallelMountCmdspecific-port3423665734/001:/mount-9p --alsologtostderr -v=1 --port 36365] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-648443 ssh "sudo umount -f /mount-9p": exit status 1 (279.942489ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-648443 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-648443 /tmp/TestFunctionalparallelMountCmdspecific-port3423665734/001:/mount-9p --alsologtostderr -v=1 --port 36365] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.00s)
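The test boils down to: start a 9p mount daemon on a fixed port in the background, verify it from inside the node, then tear it down. A condensed sketch of the same flow with a generic "minikube" binary (the retry mirrors the test, since the daemon needs a moment before findmnt succeeds):

mkdir -p /tmp/mount-src
minikube mount -p functional-648443 /tmp/mount-src:/mount-9p --port 36365 &
MOUNT_PID=$!
# Poll until the 9p filesystem is visible inside the node.
until minikube -p functional-648443 ssh "findmnt -T /mount-9p | grep 9p"; do sleep 0.5; done
minikube -p functional-648443 ssh -- ls -la /mount-9p   # inspect guest-side contents
kill $MOUNT_PID                                         # stopping the daemon removes the mount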

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.93s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-648443
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 image save --daemon ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-648443 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-648443
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)
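Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a full round-trip between the node's container storage, a tarball on disk, and the host's Docker daemon. A condensed sketch of that cycle using the same tag (tarball path simplified):

IMG=ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-648443
minikube -p functional-648443 image save "$IMG" /tmp/echo-server-save.tar   # node storage -> tarball
minikube -p functional-648443 image rm "$IMG"                               # drop it from the node
minikube -p functional-648443 image load /tmp/echo-server-save.tar          # tarball -> node storage
minikube -p functional-648443 image save --daemon "$IMG"                    # node storage -> host Docker daemon
docker image inspect "$IMG"                                                 # confirm it arrived on the host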

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-648443 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-648443 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-648443 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 44430: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-648443 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.53s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.63s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-648443 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1930910740/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-648443 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1930910740/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-648443 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1930910740/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-648443 ssh "findmnt -T" /mount1: exit status 1 (404.665293ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-648443 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-648443 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1930910740/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-648443 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1930910740/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-648443 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1930910740/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.63s)
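The cleanup step at functional_test_mount_test.go:376 relies on "mount --kill", which reaps every background mount daemon for the profile in one call; that is why the three per-mount stop attempts afterwards find the processes already gone:

# Kill all outstanding "minikube mount" daemons for this profile.
minikube mount -p functional-648443 --kill=true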

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-648443 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-648443 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [0d9ba6f0-be31-4ae1-9b6f-1fa311a32fee] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [0d9ba6f0-be31-4ae1-9b6f-1fa311a32fee] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003477319s
I0110 08:26:01.280495    7183 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2129: (dbg) Run:  out/minikube-linux-amd64 -p functional-648443 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
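update-context rewrites the profile's kubeconfig entry so kubectl points at the cluster's current API server address; the three subtests only differ in the kubeconfig state they start from. Typical usage on a workstation:

minikube -p functional-648443 update-context   # sync kubeconfig with the cluster's address
kubectl config current-context                 # should now report functional-648443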

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-648443 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.101.60 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
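The serial TunnelCmd tests follow the standard LoadBalancer workflow: keep "minikube tunnel" running, create the service, wait for an ingress IP, then hit it directly from the host. A sketch of that flow (tunnel normally needs sudo rights to install routes; the jsonpath is the one the test uses above):

minikube -p functional-648443 tunnel &
TUNNEL_PID=$!
kubectl --context functional-648443 apply -f testdata/testsvc.yaml
# Poll until the tunnel's load-balancer loop assigns an ingress IP.
until IP=$(kubectl --context functional-648443 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}') && [ -n "$IP" ]; do sleep 1; done
curl -s "http://$IP/" >/dev/null && echo "tunnel at http://$IP is working"
kill $TUNNEL_PID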

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-648443 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f ghcr.io/medyagh/image-mirrors/kicbase/echo-server:functional-648443
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-648443
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-648443
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (89.29s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0110 08:26:35.168985    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:26:35.174238    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:26:35.184556    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:26:35.204863    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:26:35.245128    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:26:35.325493    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:26:35.485928    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:26:35.806525    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:26:36.447444    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:26:37.728167    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:26:40.289152    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:26:45.409876    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:26:55.650840    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:27:16.131987    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-352743 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m28.541560989s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (89.29s)
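--ha provisions multiple control-plane nodes behind a single API endpoint. A quick way to confirm the resulting topology after a start like the one above (the label selector is the standard kubeadm control-plane label, not something this test asserts):

minikube -p ha-352743 status
kubectl --context ha-352743 get nodes -l node-role.kubernetes.io/control-plane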

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.23s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-352743 kubectl -- rollout status deployment/busybox: (2.382327897s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 kubectl -- exec busybox-769dd8b7dd-8tktf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 kubectl -- exec busybox-769dd8b7dd-9r9hc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 kubectl -- exec busybox-769dd8b7dd-sj489 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 kubectl -- exec busybox-769dd8b7dd-8tktf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 kubectl -- exec busybox-769dd8b7dd-9r9hc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 kubectl -- exec busybox-769dd8b7dd-sj489 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 kubectl -- exec busybox-769dd8b7dd-8tktf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 kubectl -- exec busybox-769dd8b7dd-9r9hc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 kubectl -- exec busybox-769dd8b7dd-sj489 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.23s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.01s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 kubectl -- exec busybox-769dd8b7dd-8tktf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 kubectl -- exec busybox-769dd8b7dd-8tktf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 kubectl -- exec busybox-769dd8b7dd-9r9hc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 kubectl -- exec busybox-769dd8b7dd-9r9hc -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 kubectl -- exec busybox-769dd8b7dd-sj489 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 kubectl -- exec busybox-769dd8b7dd-sj489 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.01s)
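The awk/cut pipeline above scrapes the resolved address of host.minikube.internal out of busybox's nslookup output (fifth line, third field) so each pod can reach the host-side gateway. The same check as a standalone two-liner against one of the pods from this run (NR==5 is tied to busybox's nslookup output layout):

HOST_IP=$(kubectl --context ha-352743 exec busybox-769dd8b7dd-8tktf -- \
  sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
kubectl --context ha-352743 exec busybox-769dd8b7dd-8tktf -- ping -c 1 "$HOST_IP"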

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (26.48s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 node add --alsologtostderr -v 5
E0110 08:27:57.092876    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-352743 node add --alsologtostderr -v 5: (25.586253165s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (26.48s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-352743 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)
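Note: the jsonpath range template above dumps every node's full label map in a single call, which lets the test assert minikube's node labels in one pass across all four nodes. It can be run verbatim outside the harness:

	kubectl --context ha-352743 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"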

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)
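Note: the HAppy* checks here and below all run the same command and, from the paired Degraded* tests, appear to differ only in the cluster state they expect the profile to report at that moment:

	out/minikube-linux-amd64 profile list --output json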

TestMultiControlPlane/serial/CopyFile (16.53s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 cp testdata/cp-test.txt ha-352743:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 cp ha-352743:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2638409145/001/cp-test_ha-352743.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 cp ha-352743:/home/docker/cp-test.txt ha-352743-m02:/home/docker/cp-test_ha-352743_ha-352743-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m02 "sudo cat /home/docker/cp-test_ha-352743_ha-352743-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 cp ha-352743:/home/docker/cp-test.txt ha-352743-m03:/home/docker/cp-test_ha-352743_ha-352743-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m03 "sudo cat /home/docker/cp-test_ha-352743_ha-352743-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 cp ha-352743:/home/docker/cp-test.txt ha-352743-m04:/home/docker/cp-test_ha-352743_ha-352743-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m04 "sudo cat /home/docker/cp-test_ha-352743_ha-352743-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 cp testdata/cp-test.txt ha-352743-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 cp ha-352743-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2638409145/001/cp-test_ha-352743-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 cp ha-352743-m02:/home/docker/cp-test.txt ha-352743:/home/docker/cp-test_ha-352743-m02_ha-352743.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743 "sudo cat /home/docker/cp-test_ha-352743-m02_ha-352743.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 cp ha-352743-m02:/home/docker/cp-test.txt ha-352743-m03:/home/docker/cp-test_ha-352743-m02_ha-352743-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m03 "sudo cat /home/docker/cp-test_ha-352743-m02_ha-352743-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 cp ha-352743-m02:/home/docker/cp-test.txt ha-352743-m04:/home/docker/cp-test_ha-352743-m02_ha-352743-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m04 "sudo cat /home/docker/cp-test_ha-352743-m02_ha-352743-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 cp testdata/cp-test.txt ha-352743-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 cp ha-352743-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2638409145/001/cp-test_ha-352743-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 cp ha-352743-m03:/home/docker/cp-test.txt ha-352743:/home/docker/cp-test_ha-352743-m03_ha-352743.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743 "sudo cat /home/docker/cp-test_ha-352743-m03_ha-352743.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 cp ha-352743-m03:/home/docker/cp-test.txt ha-352743-m02:/home/docker/cp-test_ha-352743-m03_ha-352743-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m02 "sudo cat /home/docker/cp-test_ha-352743-m03_ha-352743-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 cp ha-352743-m03:/home/docker/cp-test.txt ha-352743-m04:/home/docker/cp-test_ha-352743-m03_ha-352743-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m04 "sudo cat /home/docker/cp-test_ha-352743-m03_ha-352743-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 cp testdata/cp-test.txt ha-352743-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 cp ha-352743-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2638409145/001/cp-test_ha-352743-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 cp ha-352743-m04:/home/docker/cp-test.txt ha-352743:/home/docker/cp-test_ha-352743-m04_ha-352743.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743 "sudo cat /home/docker/cp-test_ha-352743-m04_ha-352743.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 cp ha-352743-m04:/home/docker/cp-test.txt ha-352743-m02:/home/docker/cp-test_ha-352743-m04_ha-352743-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m02 "sudo cat /home/docker/cp-test_ha-352743-m04_ha-352743-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 cp ha-352743-m04:/home/docker/cp-test.txt ha-352743-m03:/home/docker/cp-test_ha-352743-m04_ha-352743-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m03 "sudo cat /home/docker/cp-test_ha-352743-m04_ha-352743-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.53s)
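Note: the sequence above is the full copy matrix: for each of the four nodes the harness copies testdata/cp-test.txt in, copies it back out to a host temp dir, and cross-copies it to every other node, confirming each hop with ssh plus sudo cat. One hop, written out:

	out/minikube-linux-amd64 -p ha-352743 cp testdata/cp-test.txt ha-352743-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-352743 ssh -n ha-352743-m02 "sudo cat /home/docker/cp-test.txt"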

TestMultiControlPlane/serial/StopSecondaryNode (18.77s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-352743 node stop m02 --alsologtostderr -v 5: (18.0789776s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-352743 status --alsologtostderr -v 5: exit status 7 (685.595144ms)

-- stdout --
	ha-352743
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-352743-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-352743-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-352743-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0110 08:28:54.565665   66930 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:28:54.565781   66930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:28:54.565790   66930 out.go:374] Setting ErrFile to fd 2...
	I0110 08:28:54.565794   66930 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:28:54.565999   66930 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:28:54.566214   66930 out.go:368] Setting JSON to false
	I0110 08:28:54.566243   66930 mustload.go:66] Loading cluster: ha-352743
	I0110 08:28:54.566322   66930 notify.go:221] Checking for updates...
	I0110 08:28:54.566645   66930 config.go:182] Loaded profile config "ha-352743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:28:54.566664   66930 status.go:174] checking status of ha-352743 ...
	I0110 08:28:54.567195   66930 cli_runner.go:164] Run: docker container inspect ha-352743 --format={{.State.Status}}
	I0110 08:28:54.588588   66930 status.go:371] ha-352743 host status = "Running" (err=<nil>)
	I0110 08:28:54.588609   66930 host.go:66] Checking if "ha-352743" exists ...
	I0110 08:28:54.588865   66930 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-352743
	I0110 08:28:54.605838   66930 host.go:66] Checking if "ha-352743" exists ...
	I0110 08:28:54.606147   66930 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:28:54.606207   66930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-352743
	I0110 08:28:54.622475   66930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/ha-352743/id_rsa Username:docker}
	I0110 08:28:54.713668   66930 ssh_runner.go:195] Run: systemctl --version
	I0110 08:28:54.720555   66930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:28:54.732481   66930 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:28:54.784456   66930 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2026-01-10 08:28:54.775154163 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:28:54.784988   66930 kubeconfig.go:125] found "ha-352743" server: "https://192.168.49.254:8443"
	I0110 08:28:54.785014   66930 api_server.go:166] Checking apiserver status ...
	I0110 08:28:54.785046   66930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 08:28:54.797286   66930 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1234/cgroup
	I0110 08:28:54.805501   66930 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1234/cgroup
	I0110 08:28:54.813229   66930 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-2bfd93e00377b8fb597f121c9a96cd08efd1687b85d544f536ebf0f3e2c1764c.scope/container/cgroup.freeze
	I0110 08:28:54.820329   66930 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0110 08:28:54.826783   66930 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0110 08:28:54.826806   66930 status.go:463] ha-352743 apiserver status = Running (err=<nil>)
	I0110 08:28:54.826816   66930 status.go:176] ha-352743 status: &{Name:ha-352743 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:28:54.826837   66930 status.go:174] checking status of ha-352743-m02 ...
	I0110 08:28:54.827156   66930 cli_runner.go:164] Run: docker container inspect ha-352743-m02 --format={{.State.Status}}
	I0110 08:28:54.844701   66930 status.go:371] ha-352743-m02 host status = "Stopped" (err=<nil>)
	I0110 08:28:54.844721   66930 status.go:384] host is not running, skipping remaining checks
	I0110 08:28:54.844726   66930 status.go:176] ha-352743-m02 status: &{Name:ha-352743-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:28:54.844757   66930 status.go:174] checking status of ha-352743-m03 ...
	I0110 08:28:54.845037   66930 cli_runner.go:164] Run: docker container inspect ha-352743-m03 --format={{.State.Status}}
	I0110 08:28:54.863479   66930 status.go:371] ha-352743-m03 host status = "Running" (err=<nil>)
	I0110 08:28:54.863503   66930 host.go:66] Checking if "ha-352743-m03" exists ...
	I0110 08:28:54.863826   66930 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-352743-m03
	I0110 08:28:54.880620   66930 host.go:66] Checking if "ha-352743-m03" exists ...
	I0110 08:28:54.880939   66930 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:28:54.880990   66930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-352743-m03
	I0110 08:28:54.898067   66930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/ha-352743-m03/id_rsa Username:docker}
	I0110 08:28:54.987817   66930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:28:54.999926   66930 kubeconfig.go:125] found "ha-352743" server: "https://192.168.49.254:8443"
	I0110 08:28:54.999951   66930 api_server.go:166] Checking apiserver status ...
	I0110 08:28:55.000000   66930 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 08:28:55.010708   66930 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1167/cgroup
	I0110 08:28:55.018690   66930 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1167/cgroup
	I0110 08:28:55.026067   66930 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-87199dcd11d38d5d98ee3870672a0c481b4611c10442b8440e84fb7f62e21ee7.scope/container/cgroup.freeze
	I0110 08:28:55.033145   66930 api_server.go:299] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0110 08:28:55.037064   66930 api_server.go:325] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0110 08:28:55.037084   66930 status.go:463] ha-352743-m03 apiserver status = Running (err=<nil>)
	I0110 08:28:55.037092   66930 status.go:176] ha-352743-m03 status: &{Name:ha-352743-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:28:55.037104   66930 status.go:174] checking status of ha-352743-m04 ...
	I0110 08:28:55.037320   66930 cli_runner.go:164] Run: docker container inspect ha-352743-m04 --format={{.State.Status}}
	I0110 08:28:55.054768   66930 status.go:371] ha-352743-m04 host status = "Running" (err=<nil>)
	I0110 08:28:55.054788   66930 host.go:66] Checking if "ha-352743-m04" exists ...
	I0110 08:28:55.055021   66930 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-352743-m04
	I0110 08:28:55.072075   66930 host.go:66] Checking if "ha-352743-m04" exists ...
	I0110 08:28:55.072336   66930 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:28:55.072401   66930 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-352743-m04
	I0110 08:28:55.091325   66930 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/ha-352743-m04/id_rsa Username:docker}
	I0110 08:28:55.181859   66930 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:28:55.193970   66930 status.go:176] ha-352743-m04 status: &{Name:ha-352743-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (18.77s)
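Note: the non-zero exit (status 7) is the expected outcome here: in this log, minikube status exits non-zero whenever any host in the profile is stopped, so the harness accepts the exit code and asserts the per-node breakdown instead (m02 Stopped, the rest Running). To observe the same behavior by hand:

	out/minikube-linux-amd64 -p ha-352743 status --alsologtostderr -v 5; echo "exit=$?"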

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

TestMultiControlPlane/serial/RestartSecondaryNode (8.61s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-352743 node start m02 --alsologtostderr -v 5: (7.663963577s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.61s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.88s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (117.6s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 stop --alsologtostderr -v 5
E0110 08:29:19.015866    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-352743 stop --alsologtostderr -v 5: (58.386858357s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 start --wait true --alsologtostderr -v 5
E0110 08:30:34.676040    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/functional-648443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:30:34.681364    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/functional-648443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:30:34.691616    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/functional-648443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:30:34.711837    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/functional-648443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:30:34.752110    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/functional-648443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:30:34.832419    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/functional-648443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:30:34.992817    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/functional-648443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:30:35.313180    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/functional-648443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:30:35.953542    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/functional-648443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:30:37.234549    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/functional-648443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:30:39.796077    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/functional-648443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:30:44.916597    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/functional-648443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:30:55.157090    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/functional-648443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-352743 start --wait true --alsologtostderr -v 5: (59.077697671s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (117.60s)
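Note: node list is captured before the stop/start cycle and again after it, and the harness compares the two listings, so this pass means a full cluster restart preserved all four nodes. The cycle, condensed:

	out/minikube-linux-amd64 -p ha-352743 node list
	out/minikube-linux-amd64 -p ha-352743 stop
	out/minikube-linux-amd64 -p ha-352743 start --wait true
	out/minikube-linux-amd64 -p ha-352743 node list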

TestMultiControlPlane/serial/DeleteSecondaryNode (11.53s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-352743 node delete m03 --alsologtostderr -v 5: (10.721883395s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.53s)
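Note: the final go-template emits one Ready condition per remaining node, so after deleting m03 the test can check both the node count and that every remaining node reports True. The template runs as-is outside the harness:

	kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"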

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

TestMultiControlPlane/serial/StopCluster (46.63s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 stop --alsologtostderr -v 5
E0110 08:31:15.637825    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/functional-648443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:31:35.169081    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:31:56.599370    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/functional-648443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-352743 stop --alsologtostderr -v 5: (46.517132413s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-352743 status --alsologtostderr -v 5: exit status 7 (112.361633ms)

-- stdout --
	ha-352743
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-352743-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-352743-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0110 08:32:01.787547   81262 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:32:01.787800   81262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:32:01.787810   81262 out.go:374] Setting ErrFile to fd 2...
	I0110 08:32:01.787814   81262 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:32:01.788014   81262 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:32:01.788185   81262 out.go:368] Setting JSON to false
	I0110 08:32:01.788211   81262 mustload.go:66] Loading cluster: ha-352743
	I0110 08:32:01.788241   81262 notify.go:221] Checking for updates...
	I0110 08:32:01.788624   81262 config.go:182] Loaded profile config "ha-352743": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:32:01.788648   81262 status.go:174] checking status of ha-352743 ...
	I0110 08:32:01.789146   81262 cli_runner.go:164] Run: docker container inspect ha-352743 --format={{.State.Status}}
	I0110 08:32:01.807123   81262 status.go:371] ha-352743 host status = "Stopped" (err=<nil>)
	I0110 08:32:01.807142   81262 status.go:384] host is not running, skipping remaining checks
	I0110 08:32:01.807155   81262 status.go:176] ha-352743 status: &{Name:ha-352743 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:32:01.807190   81262 status.go:174] checking status of ha-352743-m02 ...
	I0110 08:32:01.807451   81262 cli_runner.go:164] Run: docker container inspect ha-352743-m02 --format={{.State.Status}}
	I0110 08:32:01.825893   81262 status.go:371] ha-352743-m02 host status = "Stopped" (err=<nil>)
	I0110 08:32:01.825917   81262 status.go:384] host is not running, skipping remaining checks
	I0110 08:32:01.825924   81262 status.go:176] ha-352743-m02 status: &{Name:ha-352743-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:32:01.825945   81262 status.go:174] checking status of ha-352743-m04 ...
	I0110 08:32:01.826246   81262 cli_runner.go:164] Run: docker container inspect ha-352743-m04 --format={{.State.Status}}
	I0110 08:32:01.843525   81262 status.go:371] ha-352743-m04 host status = "Stopped" (err=<nil>)
	I0110 08:32:01.843561   81262 status.go:384] host is not running, skipping remaining checks
	I0110 08:32:01.843572   81262 status.go:176] ha-352743-m04 status: &{Name:ha-352743-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (46.63s)

TestMultiControlPlane/serial/RestartCluster (53.24s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0110 08:32:02.856283    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-352743 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (52.417946186s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (53.24s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

TestMultiControlPlane/serial/AddSecondaryNode (44.02s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 node add --control-plane --alsologtostderr -v 5
E0110 08:33:18.521640    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/functional-648443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-352743 node add --control-plane --alsologtostderr -v 5: (43.149444256s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-352743 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.02s)
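Note: node add --control-plane is what distinguishes this test from AddWorkerNode earlier: the new node joins as a control-plane member (replacing the m03 deleted above) rather than as a worker, and status then has to show it as such. Condensed:

	out/minikube-linux-amd64 -p ha-352743 node add --control-plane
	out/minikube-linux-amd64 -p ha-352743 status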

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

TestJSONOutput/start/Command (34.89s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-308285 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-308285 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (34.89343345s)
--- PASS: TestJSONOutput/start/Command (34.89s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.98s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-308285 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-308285 --output=json --user=testUser: (7.979590952s)
--- PASS: TestJSONOutput/stop/Command (7.98s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-795734 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-795734 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (73.287075ms)

-- stdout --
	{"specversion":"1.0","id":"7d10ad24-fed9-4740-80e8-ebe3a15475a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-795734] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"84f3731a-0659-46c4-9267-4eb163da1f35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22427"}}
	{"specversion":"1.0","id":"4954500c-1d42-4bbe-9258-1086ada02718","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fcf0eda6-def8-4e50-abfb-368d193246c3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig"}}
	{"specversion":"1.0","id":"97b4bfe6-4197-43f7-a9fa-aacb34913507","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube"}}
	{"specversion":"1.0","id":"145960e9-6403-450a-b0d7-4f72c245e759","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"9dc4e344-47eb-44c3-99ad-547101a03b0f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1a8e198f-8080-4baa-a814-8dd196736615","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-795734" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-795734
--- PASS: TestErrorJSONOutput (0.22s)
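Note: with --output=json every stdout line is a CloudEvents 1.0 envelope, and the failure case ends in an io.k8s.sigs.minikube.error event carrying exitcode 56 / DRV_UNSUPPORTED_OS, matching the process exit status. The harness parses these in Go; outside it, one way to pull out the error event (jq is an assumption here, not part of the test) would be:

	out/minikube-linux-amd64 start -p json-output-error-795734 --memory=3072 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name'   # expected to print DRV_UNSUPPORTED_OS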

TestKicCustomNetwork/create_custom_network (26.14s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-109773 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-109773 --network=: (24.054362003s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-109773" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-109773
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-109773: (2.070540262s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.14s)

TestKicCustomNetwork/use_default_bridge_network (22.04s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-908375 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-908375 --network=bridge: (20.044290756s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-908375" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-908375
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-908375: (1.979700876s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.04s)

TestKicExistingNetwork (23.3s)

=== RUN   TestKicExistingNetwork
I0110 08:35:29.698774    7183 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0110 08:35:29.715294    7183 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0110 08:35:29.715360    7183 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0110 08:35:29.715375    7183 cli_runner.go:164] Run: docker network inspect existing-network
W0110 08:35:29.731800    7183 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0110 08:35:29.731831    7183 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0110 08:35:29.731857    7183 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0110 08:35:29.732018    7183 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0110 08:35:29.749410    7183 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-9da35691088c IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:0a:0c:fc:dc:fc:2f} reservation:<nil>}
I0110 08:35:29.749759    7183 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d6d5e0}
I0110 08:35:29.749795    7183 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0110 08:35:29.749843    7183 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0110 08:35:29.794138    7183 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-160618 --network=existing-network
E0110 08:35:34.679817    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/functional-648443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-160618 --network=existing-network: (21.22765696s)
helpers_test.go:176: Cleaning up "existing-network-160618" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-160618
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-160618: (1.9426183s)
I0110 08:35:52.982189    7183 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.30s)
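Note: the point of this test is that minikube adopts a docker network it did not create itself. The harness first creates existing-network (the docker network create call logged above, including minikube's created_by/name labels), then starts a profile with --network=existing-network, and finally lists networks by label to see what survived the profile delete:

	docker network ls --filter=label=existing-network --format {{.Name}}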

TestKicCustomSubnet (23.7s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-729328 --subnet=192.168.60.0/24
E0110 08:36:02.362269    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/functional-648443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-729328 --subnet=192.168.60.0/24: (21.600752938s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-729328 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-729328" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-729328
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-729328: (2.07869171s)
--- PASS: TestKicCustomSubnet (23.70s)
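Note: the subnet assertion is a single docker inspect with a Go template over the network's IPAM config, compared against the requested 192.168.60.0/24:

	docker network inspect custom-subnet-729328 --format "{{(index .IPAM.Config 0).Subnet}}"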

                                                
                                    
TestKicStaticIP (23.11s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-496204 --static-ip=192.168.200.200
E0110 08:36:35.169029    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-496204 --static-ip=192.168.200.200: (20.889576505s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-496204 ip
helpers_test.go:176: Cleaning up "static-ip-496204" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-496204
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-496204: (2.080871024s)
--- PASS: TestKicStaticIP (23.11s)
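
Equivalent manual check (profile name is illustrative); --static-ip pins the node address, which `minikube ip` should echo back:

    out/minikube-linux-amd64 start -p static-ip-demo --static-ip=192.168.200.200
    out/minikube-linux-amd64 -p static-ip-demo ip    # expect 192.168.200.200
    out/minikube-linux-amd64 delete -p static-ip-demo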

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (41.96s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-283191 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-283191 --driver=docker  --container-runtime=crio: (16.240921519s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-285398 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-285398 --driver=docker  --container-runtime=crio: (19.795291473s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-283191
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-285398
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-285398" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-285398
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-285398: (2.317637614s)
helpers_test.go:176: Cleaning up "first-283191" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-283191
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-283191: (2.37096747s)
--- PASS: TestMinikubeProfile (41.96s)
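
The test switches the active profile and re-reads `profile list -ojson`. Assuming jq is available and the current JSON schema (top-level "valid"/"invalid" arrays of profiles), the valid profile names can be pulled out like this:

    out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'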

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.65s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-012597 --memory=3072 --mount-string /tmp/TestMountStartserial4126602969/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-012597 --memory=3072 --mount-string /tmp/TestMountStartserial4126602969/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.646369889s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.65s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-012597 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)
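
The start/verify pair above condenses to the following sketch (flags mirror the test invocation; the host path and profile name are illustrative):

    out/minikube-linux-amd64 start -p mount-demo --memory=3072 \
        --mount-string /tmp/host-dir:/minikube-host \
        --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
        --no-kubernetes --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p mount-demo ssh -- ls /minikube-host    # lists the host directory's contents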

                                                
                                    
TestMountStart/serial/StartWithMountSecond (4.94s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-030435 --memory=3072 --mount-string /tmp/TestMountStartserial4126602969/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-030435 --memory=3072 --mount-string /tmp/TestMountStartserial4126602969/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.936415944s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.94s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-030435 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.66s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-012597 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-012597 --alsologtostderr -v=5: (1.661533663s)
--- PASS: TestMountStart/serial/DeleteFirst (1.66s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-030435 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-030435
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-030435: (1.241137239s)
--- PASS: TestMountStart/serial/Stop (1.24s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.09s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-030435
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-030435: (6.091448191s)
--- PASS: TestMountStart/serial/RestartStopped (7.09s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-030435 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (58.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-247665 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-247665 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (58.495926503s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (58.97s)
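
Condensed repro (profile name is illustrative); --nodes=2 brings up one control plane plus one worker in a single command:

    out/minikube-linux-amd64 start -p multinode-demo --wait=true --memory=3072 --nodes=2 \
        --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p multinode-demo status    # both nodes should report Running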

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.60s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247665 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247665 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-247665 -- rollout status deployment/busybox: (2.30929529s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247665 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247665 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247665 -- exec busybox-769dd8b7dd-7d86s -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247665 -- exec busybox-769dd8b7dd-bd895 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247665 -- exec busybox-769dd8b7dd-7d86s -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247665 -- exec busybox-769dd8b7dd-bd895 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247665 -- exec busybox-769dd8b7dd-7d86s -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247665 -- exec busybox-769dd8b7dd-bd895 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.60s)
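
The DNS assertions boil down to exec-ing nslookup inside the deployed pods. A sketch against the busybox deployment the test manifest creates (context name is illustrative; exec-ing the deployment picks a single pod, whereas the test addresses each replica by name):

    kubectl --context multinode-demo exec deployment/busybox -- nslookup kubernetes.default
    kubectl --context multinode-demo exec deployment/busybox -- nslookup kubernetes.default.svc.cluster.local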

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247665 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247665 -- exec busybox-769dd8b7dd-7d86s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247665 -- exec busybox-769dd8b7dd-7d86s -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247665 -- exec busybox-769dd8b7dd-bd895 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-247665 -- exec busybox-769dd8b7dd-bd895 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.69s)

                                                
                                    
TestMultiNode/serial/AddNode (22.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-247665 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-247665 -v=5 --alsologtostderr: (22.316826719s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (22.95s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-247665 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 cp testdata/cp-test.txt multinode-247665:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 ssh -n multinode-247665 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 cp multinode-247665:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3464398095/001/cp-test_multinode-247665.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 ssh -n multinode-247665 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 cp multinode-247665:/home/docker/cp-test.txt multinode-247665-m02:/home/docker/cp-test_multinode-247665_multinode-247665-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 ssh -n multinode-247665 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 ssh -n multinode-247665-m02 "sudo cat /home/docker/cp-test_multinode-247665_multinode-247665-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 cp multinode-247665:/home/docker/cp-test.txt multinode-247665-m03:/home/docker/cp-test_multinode-247665_multinode-247665-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 ssh -n multinode-247665 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 ssh -n multinode-247665-m03 "sudo cat /home/docker/cp-test_multinode-247665_multinode-247665-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 cp testdata/cp-test.txt multinode-247665-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 ssh -n multinode-247665-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 cp multinode-247665-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3464398095/001/cp-test_multinode-247665-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 ssh -n multinode-247665-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 cp multinode-247665-m02:/home/docker/cp-test.txt multinode-247665:/home/docker/cp-test_multinode-247665-m02_multinode-247665.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 ssh -n multinode-247665-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 ssh -n multinode-247665 "sudo cat /home/docker/cp-test_multinode-247665-m02_multinode-247665.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 cp multinode-247665-m02:/home/docker/cp-test.txt multinode-247665-m03:/home/docker/cp-test_multinode-247665-m02_multinode-247665-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 ssh -n multinode-247665-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 ssh -n multinode-247665-m03 "sudo cat /home/docker/cp-test_multinode-247665-m02_multinode-247665-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 cp testdata/cp-test.txt multinode-247665-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 ssh -n multinode-247665-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 cp multinode-247665-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3464398095/001/cp-test_multinode-247665-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 ssh -n multinode-247665-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 cp multinode-247665-m03:/home/docker/cp-test.txt multinode-247665:/home/docker/cp-test_multinode-247665-m03_multinode-247665.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 ssh -n multinode-247665-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 ssh -n multinode-247665 "sudo cat /home/docker/cp-test_multinode-247665-m03_multinode-247665.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 cp multinode-247665-m03:/home/docker/cp-test.txt multinode-247665-m02:/home/docker/cp-test_multinode-247665-m03_multinode-247665-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 ssh -n multinode-247665-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 ssh -n multinode-247665-m02 "sudo cat /home/docker/cp-test_multinode-247665-m03_multinode-247665-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.44s)
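
The matrix above is `minikube cp` in every direction: host-to-node, node-to-host, and node-to-node, each verified with ssh + cat. One leg of it, with illustrative names:

    out/minikube-linux-amd64 -p multinode-demo cp testdata/cp-test.txt multinode-demo:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt \
        multinode-demo-m02:/home/docker/cp-test_copy.txt
    out/minikube-linux-amd64 -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test_copy.txt"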

                                                
                                    
TestMultiNode/serial/StopNode (2.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-247665 node stop m03: (1.259260482s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-247665 status: exit status 7 (489.512365ms)

                                                
                                                
-- stdout --
	multinode-247665
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-247665-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-247665-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-247665 status --alsologtostderr: exit status 7 (494.363491ms)

                                                
                                                
-- stdout --
	multinode-247665
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-247665-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-247665-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 08:39:25.441331  141238 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:39:25.441430  141238 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:39:25.441438  141238 out.go:374] Setting ErrFile to fd 2...
	I0110 08:39:25.441442  141238 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:39:25.441602  141238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:39:25.441803  141238 out.go:368] Setting JSON to false
	I0110 08:39:25.441830  141238 mustload.go:66] Loading cluster: multinode-247665
	I0110 08:39:25.441875  141238 notify.go:221] Checking for updates...
	I0110 08:39:25.442169  141238 config.go:182] Loaded profile config "multinode-247665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:39:25.442185  141238 status.go:174] checking status of multinode-247665 ...
	I0110 08:39:25.442588  141238 cli_runner.go:164] Run: docker container inspect multinode-247665 --format={{.State.Status}}
	I0110 08:39:25.460803  141238 status.go:371] multinode-247665 host status = "Running" (err=<nil>)
	I0110 08:39:25.460824  141238 host.go:66] Checking if "multinode-247665" exists ...
	I0110 08:39:25.461068  141238 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-247665
	I0110 08:39:25.477939  141238 host.go:66] Checking if "multinode-247665" exists ...
	I0110 08:39:25.478225  141238 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:39:25.478271  141238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-247665
	I0110 08:39:25.494910  141238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/multinode-247665/id_rsa Username:docker}
	I0110 08:39:25.584849  141238 ssh_runner.go:195] Run: systemctl --version
	I0110 08:39:25.590971  141238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:39:25.603070  141238 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:39:25.658969  141238 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2026-01-10 08:39:25.649844224 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:39:25.659484  141238 kubeconfig.go:125] found "multinode-247665" server: "https://192.168.67.2:8443"
	I0110 08:39:25.659512  141238 api_server.go:166] Checking apiserver status ...
	I0110 08:39:25.659550  141238 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0110 08:39:25.670526  141238 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1232/cgroup
	I0110 08:39:25.678584  141238 ssh_runner.go:195] Run: sudo grep ^0:: /proc/1232/cgroup
	I0110 08:39:25.685760  141238 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/system.slice/crio-10ddeaf73a9805bc110ed71b966a6cfc36026aa04a75db38e858e684dad0bc18.scope/container/cgroup.freeze
	I0110 08:39:25.692534  141238 api_server.go:299] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0110 08:39:25.697267  141238 api_server.go:325] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0110 08:39:25.697286  141238 status.go:463] multinode-247665 apiserver status = Running (err=<nil>)
	I0110 08:39:25.697294  141238 status.go:176] multinode-247665 status: &{Name:multinode-247665 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:39:25.697309  141238 status.go:174] checking status of multinode-247665-m02 ...
	I0110 08:39:25.697538  141238 cli_runner.go:164] Run: docker container inspect multinode-247665-m02 --format={{.State.Status}}
	I0110 08:39:25.714536  141238 status.go:371] multinode-247665-m02 host status = "Running" (err=<nil>)
	I0110 08:39:25.714557  141238 host.go:66] Checking if "multinode-247665-m02" exists ...
	I0110 08:39:25.714835  141238 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-247665-m02
	I0110 08:39:25.732632  141238 host.go:66] Checking if "multinode-247665-m02" exists ...
	I0110 08:39:25.732880  141238 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0110 08:39:25.732918  141238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-247665-m02
	I0110 08:39:25.750029  141238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/22427-3641/.minikube/machines/multinode-247665-m02/id_rsa Username:docker}
	I0110 08:39:25.838548  141238 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0110 08:39:25.861184  141238 status.go:176] multinode-247665-m02 status: &{Name:multinode-247665-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:39:25.861218  141238 status.go:174] checking status of multinode-247665-m03 ...
	I0110 08:39:25.861533  141238 cli_runner.go:164] Run: docker container inspect multinode-247665-m03 --format={{.State.Status}}
	I0110 08:39:25.878764  141238 status.go:371] multinode-247665-m03 host status = "Stopped" (err=<nil>)
	I0110 08:39:25.878785  141238 status.go:384] host is not running, skipping remaining checks
	I0110 08:39:25.878793  141238 status.go:176] multinode-247665-m03 status: &{Name:multinode-247665-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
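
As the run shows, `status` deliberately exits 7 rather than 0 once any host is stopped, so scripts can gate on the exit code instead of parsing the table. Sketch with an illustrative profile:

    out/minikube-linux-amd64 -p multinode-demo node stop m03
    out/minikube-linux-amd64 -p multinode-demo status; echo "exit=$?"    # prints exit=7 while m03 is stopped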

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-247665 node start m03 -v=5 --alsologtostderr: (6.372507546s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.06s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (56.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-247665
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-247665
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-247665: (31.351188754s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-247665 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-247665 --wait=true -v=5 --alsologtostderr: (24.987249979s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-247665
--- PASS: TestMultiNode/serial/RestartKeepsNodes (56.46s)
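
The property under test: a full stop/start cycle preserves the node inventory. Condensed (profile name is illustrative):

    out/minikube-linux-amd64 node list -p multinode-demo    # record the list
    out/minikube-linux-amd64 stop -p multinode-demo
    out/minikube-linux-amd64 start -p multinode-demo --wait=true
    out/minikube-linux-amd64 node list -p multinode-demo    # should match the pre-stop list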

                                                
                                    
TestMultiNode/serial/DeleteNode (4.96s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-247665 node delete m03: (4.374820593s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.96s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (19.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 stop
E0110 08:40:34.675279    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/functional-648443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-247665 stop: (19.250146702s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-247665 status: exit status 7 (95.714674ms)

                                                
                                                
-- stdout --
	multinode-247665
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-247665-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-247665 status --alsologtostderr: exit status 7 (95.13132ms)

                                                
                                                
-- stdout --
	multinode-247665
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-247665-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0110 08:40:53.756254  150164 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:40:53.756485  150164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:40:53.756492  150164 out.go:374] Setting ErrFile to fd 2...
	I0110 08:40:53.756496  150164 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:40:53.756664  150164 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:40:53.756837  150164 out.go:368] Setting JSON to false
	I0110 08:40:53.756863  150164 mustload.go:66] Loading cluster: multinode-247665
	I0110 08:40:53.756994  150164 notify.go:221] Checking for updates...
	I0110 08:40:53.757191  150164 config.go:182] Loaded profile config "multinode-247665": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:40:53.757204  150164 status.go:174] checking status of multinode-247665 ...
	I0110 08:40:53.757628  150164 cli_runner.go:164] Run: docker container inspect multinode-247665 --format={{.State.Status}}
	I0110 08:40:53.777325  150164 status.go:371] multinode-247665 host status = "Stopped" (err=<nil>)
	I0110 08:40:53.777342  150164 status.go:384] host is not running, skipping remaining checks
	I0110 08:40:53.777348  150164 status.go:176] multinode-247665 status: &{Name:multinode-247665 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0110 08:40:53.777379  150164 status.go:174] checking status of multinode-247665-m02 ...
	I0110 08:40:53.777620  150164 cli_runner.go:164] Run: docker container inspect multinode-247665-m02 --format={{.State.Status}}
	I0110 08:40:53.794900  150164 status.go:371] multinode-247665-m02 host status = "Stopped" (err=<nil>)
	I0110 08:40:53.794950  150164 status.go:384] host is not running, skipping remaining checks
	I0110 08:40:53.794968  150164 status.go:176] multinode-247665-m02 status: &{Name:multinode-247665-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (19.44s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (46.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-247665 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0110 08:41:35.169019    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-247665 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (46.301425068s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-247665 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (46.89s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (21.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-247665
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-247665-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-247665-m02 --driver=docker  --container-runtime=crio: exit status 14 (69.516459ms)

                                                
                                                
-- stdout --
	* [multinode-247665-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-247665-m02' is duplicated with machine name 'multinode-247665-m02' in profile 'multinode-247665'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-247665-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-247665-m03 --driver=docker  --container-runtime=crio: (19.151585785s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-247665
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-247665: exit status 80 (291.563163ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-247665 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-247665-m03 already exists in multinode-247665-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-247665-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-247665-m03: (2.319954216s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (21.89s)

                                                
                                    
TestScheduledStopUnix (96.17s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-701534 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-701534 --memory=3072 --driver=docker  --container-runtime=crio: (19.708865656s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-701534 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0110 08:42:26.528542  160075 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:42:26.529181  160075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:42:26.529197  160075 out.go:374] Setting ErrFile to fd 2...
	I0110 08:42:26.529204  160075 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:42:26.530184  160075 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:42:26.530471  160075 out.go:368] Setting JSON to false
	I0110 08:42:26.530560  160075 mustload.go:66] Loading cluster: scheduled-stop-701534
	I0110 08:42:26.530917  160075 config.go:182] Loaded profile config "scheduled-stop-701534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:42:26.530992  160075 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/scheduled-stop-701534/config.json ...
	I0110 08:42:26.531179  160075 mustload.go:66] Loading cluster: scheduled-stop-701534
	I0110 08:42:26.531271  160075 config.go:182] Loaded profile config "scheduled-stop-701534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-701534 -n scheduled-stop-701534
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-701534 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0110 08:42:26.932226  160231 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:42:26.932337  160231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:42:26.932345  160231 out.go:374] Setting ErrFile to fd 2...
	I0110 08:42:26.932349  160231 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:42:26.932535  160231 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:42:26.932779  160231 out.go:368] Setting JSON to false
	I0110 08:42:26.932977  160231 daemonize_unix.go:73] killing process 160111 as it is an old scheduled stop
	I0110 08:42:26.933088  160231 mustload.go:66] Loading cluster: scheduled-stop-701534
	I0110 08:42:26.933426  160231 config.go:182] Loaded profile config "scheduled-stop-701534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:42:26.933512  160231 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/scheduled-stop-701534/config.json ...
	I0110 08:42:26.933705  160231 mustload.go:66] Loading cluster: scheduled-stop-701534
	I0110 08:42:26.933843  160231 config.go:182] Loaded profile config "scheduled-stop-701534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I0110 08:42:26.937875    7183 retry.go:84] will retry after 0s: open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/scheduled-stop-701534/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-701534 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-701534 -n scheduled-stop-701534
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-701534
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-701534 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I0110 08:42:52.784016  160942 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:42:52.784102  160942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:42:52.784110  160942 out.go:374] Setting ErrFile to fd 2...
	I0110 08:42:52.784114  160942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:42:52.784336  160942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:42:52.784555  160942 out.go:368] Setting JSON to false
	I0110 08:42:52.784628  160942 mustload.go:66] Loading cluster: scheduled-stop-701534
	I0110 08:42:52.784902  160942 config.go:182] Loaded profile config "scheduled-stop-701534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:42:52.784970  160942 profile.go:143] Saving config to /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/scheduled-stop-701534/config.json ...
	I0110 08:42:52.785154  160942 mustload.go:66] Loading cluster: scheduled-stop-701534
	I0110 08:42:52.785242  160942 config.go:182] Loaded profile config "scheduled-stop-701534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0

                                                
                                                
** /stderr **
E0110 08:42:58.218788    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-701534
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-701534: exit status 7 (79.303467ms)

                                                
                                                
-- stdout --
	scheduled-stop-701534
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-701534 -n scheduled-stop-701534
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-701534 -n scheduled-stop-701534: exit status 7 (77.941422ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-701534" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-701534
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-701534: (4.985033628s)
--- PASS: TestScheduledStopUnix (96.17s)
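
The scheduled-stop lifecycle exercised above, condensed (profile name is illustrative): schedule, cancel, re-schedule, then observe the host reach Stopped:

    out/minikube-linux-amd64 stop -p sched-demo --schedule 5m
    out/minikube-linux-amd64 status --format='{{.TimeToStop}}' -p sched-demo    # non-empty while a stop is pending
    out/minikube-linux-amd64 stop -p sched-demo --cancel-scheduled
    out/minikube-linux-amd64 stop -p sched-demo --schedule 15s
    sleep 20 && out/minikube-linux-amd64 status --format='{{.Host}}' -p sched-demo    # expect Stopped (status exits 7)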

                                                
                                    
TestInsufficientStorage (11.59s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-221766 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-221766 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.114617569s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e99dfe92-1477-4688-b5e3-313d1c14839c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-221766] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3da18166-d7cf-4acd-96e4-25b23fa103a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22427"}}
	{"specversion":"1.0","id":"fbe644aa-4741-49fb-b343-57fae2ca44e7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6d7d8701-e4d2-4157-8214-70e955881175","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig"}}
	{"specversion":"1.0","id":"83066b81-a578-43f7-bfb8-499c9051380b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube"}}
	{"specversion":"1.0","id":"ee00914d-58e2-4131-8152-f80b077697ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"db2f1bd2-16e1-4971-a8a0-09484b5b710b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"97067b42-5144-49bc-a95f-5f7ddd86e544","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b5eb606e-1711-4440-9e50-75f9dbaa3548","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f3ea7e3e-d77c-498d-805e-fef11cca60a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"875677ee-c594-4219-ab8a-c716aae183ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c1d0b176-c3dc-4800-82e3-303408895539","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-221766\" primary control-plane node in \"insufficient-storage-221766\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"92df2f94-fccd-4f6e-920b-a769ba682e92","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1767944074-22401 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1acfba21-f106-4867-b09c-a45400a38509","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"5f380b00-8f55-4048-9e74-1929ae904b18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-221766 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-221766 --output=json --layout=cluster: exit status 7 (286.945411ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-221766","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-221766","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0110 08:43:52.325132  163466 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-221766" does not appear in /home/jenkins/minikube-integration/22427-3641/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-221766 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-221766 --output=json --layout=cluster: exit status 7 (279.36336ms)

-- stdout --
	{"Name":"insufficient-storage-221766","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-221766","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0110 08:43:52.605630  163577 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-221766" does not appear in /home/jenkins/minikube-integration/22427-3641/kubeconfig
	E0110 08:43:52.615811  163577 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/insufficient-storage-221766/events.json: no such file or directory

** /stderr **
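The two --output=json --layout=cluster payloads above share a stable shape: StatusCode/StatusName at the cluster, component, and node level. A minimal Go sketch of unmarshalling it, with field names copied from the JSON shown here (not from minikube's own types):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    type component struct {
    	Name       string `json:"Name"`
    	StatusCode int    `json:"StatusCode"`
    	StatusName string `json:"StatusName"`
    }

    type node struct {
    	Name       string               `json:"Name"`
    	StatusCode int                  `json:"StatusCode"`
    	StatusName string               `json:"StatusName"`
    	Components map[string]component `json:"Components"`
    }

    type clusterStatus struct {
    	Name         string               `json:"Name"`
    	StatusCode   int                  `json:"StatusCode"`
    	StatusName   string               `json:"StatusName"`
    	StatusDetail string               `json:"StatusDetail"`
    	Components   map[string]component `json:"Components"`
    	Nodes        []node               `json:"Nodes"`
    }

    func main() {
    	// e.g. piped from: minikube status --output=json --layout=cluster
    	var st clusterStatus
    	if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// 507/InsufficientStorage is the pair this test asserts on.
    	fmt.Printf("%s: %d %s (%s)\n", st.Name, st.StatusCode, st.StatusName, st.StatusDetail)
    }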
helpers_test.go:176: Cleaning up "insufficient-storage-221766" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-221766
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-221766: (1.905312649s)
--- PASS: TestInsufficientStorage (11.59s)

TestRunningBinaryUpgrade (291.85s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.2474146280 start -p running-upgrade-322245 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.2474146280 start -p running-upgrade-322245 --memory=3072 --vm-driver=docker  --container-runtime=crio: (21.332579312s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-322245 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-322245 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m27.644990121s)
helpers_test.go:176: Cleaning up "running-upgrade-322245" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-322245
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-322245: (2.185340722s)
--- PASS: TestRunningBinaryUpgrade (291.85s)

TestKubernetesUpgrade (158.73s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-182534 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-182534 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.057781225s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-182534 --alsologtostderr
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-182534 --alsologtostderr: (2.112999315s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-182534 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-182534 status --format={{.Host}}: exit status 7 (111.447655ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-182534 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-182534 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (2m3.560893634s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-182534 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-182534 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-182534 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (98.359996ms)

-- stdout --
	* [kubernetes-upgrade-182534] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-182534
	    minikube start -p kubernetes-upgrade-182534 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1825342 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0, by running:
	    
	    minikube start -p kubernetes-upgrade-182534 --kubernetes-version=v1.35.0
	    

** /stderr **
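The K8S_DOWNGRADE_UNSUPPORTED refusal above boils down to an ordering check between the requested version and the version already deployed. A sketch of that idea using golang.org/x/mod/semver; this is illustrative only and not necessarily the comparison minikube itself performs:

    package main

    import (
    	"fmt"

    	"golang.org/x/mod/semver"
    )

    // checkDowngrade mirrors the rule enforced above: a start against an
    // existing cluster may keep or raise the Kubernetes version, never lower it.
    func checkDowngrade(existing, requested string) error {
    	if semver.Compare(requested, existing) < 0 {
    		return fmt.Errorf("unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(checkDowngrade("v1.35.0", "v1.28.0")) // refused, as in the log
    	fmt.Println(checkDowngrade("v1.28.0", "v1.35.0")) // allowed: an upgrade
    }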
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-182534 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-182534 --memory=3072 --kubernetes-version=v1.35.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4.923089069s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-182534" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-182534
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-182534: (2.783666362s)
--- PASS: TestKubernetesUpgrade (158.73s)

TestMissingContainerUpgrade (92.27s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3525341215 start -p missing-upgrade-854643 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3525341215 start -p missing-upgrade-854643 --memory=3072 --driver=docker  --container-runtime=crio: (44.460015241s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-854643
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-854643: (2.013141627s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-854643
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-854643 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-854643 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.554711749s)
helpers_test.go:176: Cleaning up "missing-upgrade-854643" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-854643
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-854643: (2.540005848s)
--- PASS: TestMissingContainerUpgrade (92.27s)

TestPause/serial/Start (59.66s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-678123 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-678123 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (59.664310846s)
--- PASS: TestPause/serial/Start (59.66s)

TestStoppedBinaryUpgrade/Setup (0.54s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.54s)

TestStoppedBinaryUpgrade/Upgrade (304.34s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.2048029683 start -p stopped-upgrade-761816 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.2048029683 start -p stopped-upgrade-761816 --memory=3072 --vm-driver=docker  --container-runtime=crio: (45.002263197s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.2048029683 -p stopped-upgrade-761816 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.2048029683 -p stopped-upgrade-761816 stop: (1.897311446s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-761816 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-761816 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m17.438400617s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (304.34s)

TestPause/serial/SecondStartNoReconfiguration (6.25s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-678123 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-678123 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.240103955s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.25s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-565281 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-565281 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (74.855073ms)

-- stdout --
	* [NoKubernetes-565281] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
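Exit status 14 here is plain argument validation: --no-kubernetes and an explicit --kubernetes-version contradict each other, so the combination is rejected before anything starts. A minimal sketch of that kind of check with the standard flag package; the flag names simply mirror the CLI above:

    package main

    import (
    	"flag"
    	"fmt"
    	"os"
    )

    func main() {
    	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
    	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to deploy")
    	flag.Parse()

    	// The two flags are mutually exclusive, so fail fast with a usage error.
    	if *noK8s && *k8sVersion != "" {
    		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
    		os.Exit(14) // usage error, matching the exit status seen above
    	}
    }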
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.07s)

TestNoKubernetes/serial/StartWithK8s (21.81s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-565281 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-565281 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.447894863s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-565281 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (21.81s)

TestNetworkPlugins/group/false (4.98s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-472660 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-472660 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (185.573358ms)

-- stdout --
	* [false-472660] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22427
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0110 08:45:35.119223  192615 out.go:360] Setting OutFile to fd 1 ...
	I0110 08:45:35.119493  192615 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:45:35.119504  192615 out.go:374] Setting ErrFile to fd 2...
	I0110 08:45:35.119511  192615 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0110 08:45:35.119792  192615 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22427-3641/.minikube/bin
	I0110 08:45:35.120402  192615 out.go:368] Setting JSON to false
	I0110 08:45:35.121812  192615 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1687,"bootTime":1768033048,"procs":284,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0110 08:45:35.121887  192615 start.go:143] virtualization: kvm guest
	I0110 08:45:35.124077  192615 out.go:179] * [false-472660] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0110 08:45:35.125539  192615 out.go:179]   - MINIKUBE_LOCATION=22427
	I0110 08:45:35.125589  192615 notify.go:221] Checking for updates...
	I0110 08:45:35.131161  192615 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0110 08:45:35.132634  192615 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22427-3641/kubeconfig
	I0110 08:45:35.134366  192615 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22427-3641/.minikube
	I0110 08:45:35.138265  192615 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0110 08:45:35.139425  192615 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0110 08:45:35.141149  192615 config.go:182] Loaded profile config "NoKubernetes-565281": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:45:35.141308  192615 config.go:182] Loaded profile config "kubernetes-upgrade-182534": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
	I0110 08:45:35.141427  192615 config.go:182] Loaded profile config "stopped-upgrade-761816": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I0110 08:45:35.141535  192615 driver.go:422] Setting default libvirt URI to qemu:///system
	I0110 08:45:35.168708  192615 docker.go:124] docker version: linux-29.1.4:Docker Engine - Community
	I0110 08:45:35.168826  192615 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0110 08:45:35.231769  192615 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2026-01-10 08:45:35.220670449 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.4 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:dea7da592f5d1d2b7755e3a161be07f43fad8f75 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.1] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0110 08:45:35.231909  192615 docker.go:319] overlay module found
	I0110 08:45:35.233919  192615 out.go:179] * Using the docker driver based on user configuration
	I0110 08:45:35.235038  192615 start.go:309] selected driver: docker
	I0110 08:45:35.235058  192615 start.go:928] validating driver "docker" against <nil>
	I0110 08:45:35.235072  192615 start.go:939] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0110 08:45:35.236914  192615 out.go:203] 
	W0110 08:45:35.238181  192615 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0110 08:45:35.239336  192615 out.go:203] 

** /stderr **
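The MK_USAGE failure is the expected outcome: with the crio runtime, pod networking is delegated entirely to a CNI plugin, so --cni=false is rejected before any node is created. A sketch of the validation rule as stated in the error above (illustrative, not minikube's actual code; treating containerd the same way is an assumption based on both runtimes lacking a built-in pod network):

    package main

    import (
    	"fmt"
    	"os"
    )

    // validateCNI encodes the rule from the log: runtimes without a
    // built-in pod network cannot run with CNI explicitly disabled.
    func validateCNI(runtime, cni string) error {
    	if cni == "false" && (runtime == "crio" || runtime == "containerd") {
    		return fmt.Errorf("the %q container runtime requires CNI", runtime)
    	}
    	return nil
    }

    func main() {
    	if err := validateCNI("crio", "false"); err != nil {
    		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
    		os.Exit(14)
    	}
    }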
net_test.go:88: 
----------------------- debugLogs start: false-472660 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-472660

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-472660

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-472660

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-472660

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-472660

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-472660

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-472660

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-472660

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-472660

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-472660

>>> host: /etc/nsswitch.conf:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: /etc/hosts:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: /etc/resolv.conf:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-472660

>>> host: crictl pods:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: crictl containers:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> k8s: describe netcat deployment:
error: context "false-472660" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-472660" does not exist

>>> k8s: netcat logs:
error: context "false-472660" does not exist

>>> k8s: describe coredns deployment:
error: context "false-472660" does not exist

>>> k8s: describe coredns pods:
error: context "false-472660" does not exist

>>> k8s: coredns logs:
error: context "false-472660" does not exist

>>> k8s: describe api server pod(s):
error: context "false-472660" does not exist

>>> k8s: api server logs:
error: context "false-472660" does not exist

>>> host: /etc/cni:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: ip a s:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: ip r s:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: iptables-save:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: iptables table nat:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> k8s: describe kube-proxy daemon set:
error: context "false-472660" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-472660" does not exist

>>> k8s: kube-proxy logs:
error: context "false-472660" does not exist

>>> host: kubelet daemon status:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: kubelet daemon config:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> k8s: kubelet logs:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 10 Jan 2026 08:44:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-761816
contexts:
- context:
    cluster: stopped-upgrade-761816
    user: stopped-upgrade-761816
  name: stopped-upgrade-761816
current-context: ""
kind: Config
users:
- name: stopped-upgrade-761816
  user:
    client-certificate: /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/stopped-upgrade-761816/client.crt
    client-key: /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/stopped-upgrade-761816/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-472660

>>> host: docker daemon status:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: docker daemon config:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: /etc/docker/daemon.json:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: docker system info:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: cri-docker daemon status:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: cri-docker daemon config:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: cri-dockerd version:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: containerd daemon status:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: containerd daemon config:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: /etc/containerd/config.toml:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: containerd config dump:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: crio daemon status:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: crio daemon config:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: /etc/crio:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

>>> host: crio config:
* Profile "false-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-472660"

----------------------- debugLogs end: false-472660 [took: 4.554180285s] --------------------------------
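The kubectl failures throughout this debug log follow directly from the kubeconfig shown above: current-context is empty and no false-472660 context exists (the profile was never created). A small client-go sketch that reads a kubeconfig and reports its contexts; taking the path from the KUBECONFIG environment variable is an assumption for the example:

    package main

    import (
    	"fmt"
    	"os"

    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.LoadFromFile(os.Getenv("KUBECONFIG"))
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Printf("current-context: %q\n", cfg.CurrentContext) // "" in the config above
    	for name, ctx := range cfg.Contexts {
    		fmt.Printf("context %s -> cluster %s (user %s)\n", name, ctx.Cluster, ctx.AuthInfo)
    	}
    }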
helpers_test.go:176: Cleaning up "false-472660" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-472660
--- PASS: TestNetworkPlugins/group/false (4.98s)

TestNoKubernetes/serial/StartWithStopK8s (9.96s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-565281 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-565281 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.6513096s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-565281 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-565281 status -o json: exit status 2 (311.055719ms)

-- stdout --
	{"Name":"NoKubernetes-565281","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
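Exit status 2 with Host "Running" but Kubelet and APIServer "Stopped" is the expected shape for a --no-kubernetes profile. Unlike the --layout=cluster status earlier, this plain -o json payload is flat; a short Go sketch of its shape, with fields copied from the output above:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // profileStatus mirrors the flat `minikube status -o json` payload above.
    type profileStatus struct {
    	Name       string `json:"Name"`
    	Host       string `json:"Host"`
    	Kubelet    string `json:"Kubelet"`
    	APIServer  string `json:"APIServer"`
    	Kubeconfig string `json:"Kubeconfig"`
    	Worker     bool   `json:"Worker"`
    }

    func main() {
    	raw := `{"Name":"NoKubernetes-565281","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
    	var st profileStatus
    	if err := json.Unmarshal([]byte(raw), &st); err != nil {
    		panic(err)
    	}
    	// For --no-kubernetes, a running host with a stopped kubelet is success.
    	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
    }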
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-565281
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-565281: (1.995554538s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.96s)

TestNoKubernetes/serial/Start (4.51s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-565281 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-565281 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.512104425s)
--- PASS: TestNoKubernetes/serial/Start (4.51s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22427-3641/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-565281 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-565281 "sudo systemctl is-active --quiet service kubelet": exit status 1 (293.611042ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
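The ssh exit status 3 above is systemd reporting an inactive unit: systemctl is-active exits 0 for an active unit and non-zero (typically 3) otherwise, which is exactly what this verification relies on. A sketch of the same probe via os/exec; running it against the local kubelet unit is the illustrative part:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
    	err := cmd.Run()
    	var exitErr *exec.ExitError
    	switch {
    	case err == nil:
    		fmt.Println("kubelet is active") // would fail this test
    	case errors.As(err, &exitErr):
    		fmt.Printf("kubelet not active (exit %d)\n", exitErr.ExitCode()) // exit 3 = inactive
    	default:
    		fmt.Println("could not run systemctl:", err)
    	}
    }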
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

TestNoKubernetes/serial/ProfileList (34.35s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (18.203276685s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
E0110 08:46:35.168993    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/addons-910183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (16.149466698s)
--- PASS: TestNoKubernetes/serial/ProfileList (34.35s)

TestNoKubernetes/serial/Stop (1.26s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-565281
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-565281: (1.258211021s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

TestNoKubernetes/serial/StartNoArgs (6.31s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-565281 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-565281 --driver=docker  --container-runtime=crio: (6.307740914s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.31s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-565281 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-565281 "sudo systemctl is-active --quiet service kubelet": exit status 1 (275.328001ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestPreload/Start-NoPreload-PullImage (49.63s)

=== RUN   TestPreload/Start-NoPreload-PullImage
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-426282 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-426282 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (42.920703554s)
preload_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-426282 image pull ghcr.io/medyagh/image-mirrors/busybox:latest
preload_test.go:62: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-426282
preload_test.go:62: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-426282: (6.193465467s)
--- PASS: TestPreload/Start-NoPreload-PullImage (49.63s)

TestPreload/Restart-With-Preload-Check-User-Image (43.59s)

=== RUN   TestPreload/Restart-With-Preload-Check-User-Image
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-426282 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-426282 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (43.304806132s)
preload_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-426282 image list
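This final image list call is the whole point of the preload pair: an image pulled before the stop (the busybox mirror above) must still be present after a restart that switches back to the preloaded tarball. A minimal sketch of that assertion via os/exec, with the binary path and profile name taken from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("out/minikube-linux-amd64", "-p", "test-preload-426282", "image", "list").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// The user-pulled image must survive the preload restart.
    	if !strings.Contains(string(out), "busybox") {
    		fmt.Fprintln(os.Stderr, "busybox missing after restart: preload clobbered user images")
    		os.Exit(1)
    	}
    	fmt.Println("busybox survived the preload restart")
    }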
--- PASS: TestPreload/Restart-With-Preload-Check-User-Image (43.59s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-761816
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

TestNetworkPlugins/group/auto/Start (40.77s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-472660 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-472660 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (40.770452973s)
--- PASS: TestNetworkPlugins/group/auto/Start (40.77s)

TestNetworkPlugins/group/kindnet/Start (41.75s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-472660 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-472660 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (41.752593696s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.75s)

TestNetworkPlugins/group/calico/Start (48.91s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-472660 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-472660 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (48.911403913s)
--- PASS: TestNetworkPlugins/group/calico/Start (48.91s)

TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-472660 "pgrep -a kubelet"
I0110 08:49:51.879061    7183 config.go:182] Loaded profile config "auto-472660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.37s)

TestNetworkPlugins/group/auto/NetCatPod (8.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-472660 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-jcsq4" [4c6e1307-aca9-414d-91fe-5e8629c80a75] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-jcsq4" [4c6e1307-aca9-414d-91fe-5e8629c80a75] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004451877s
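The "waiting for pods matching" lines above are a plain label-selector poll against the cluster. A client-go sketch of the same readiness loop; the kubeconfig source, namespace, and the simplification of checking only the first matching pod are illustrative choices, not the test's exact logic:

    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	deadline := time.Now().Add(15 * time.Minute) // the test waits up to 15m
    	for time.Now().Before(deadline) {
    		pods, err := client.CoreV1().Pods("default").List(context.TODO(),
    			metav1.ListOptions{LabelSelector: "app=netcat"})
    		if err == nil && len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
    			fmt.Println("app=netcat is running")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Fprintln(os.Stderr, "timed out waiting for app=netcat")
    	os.Exit(1)
    }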
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.24s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-djvrn" [e6bd6900-f6bb-4bfb-b237-755a20f3d559] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004554559s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/auto/DNS (0.1s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-472660 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.10s)

TestNetworkPlugins/group/auto/Localhost (0.08s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-472660 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.08s)

TestNetworkPlugins/group/auto/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-472660 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-472660 "pgrep -a kubelet"
I0110 08:50:04.606751    7183 config.go:182] Loaded profile config "kindnet-472660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-472660 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-zsqrb" [d05e3d30-e940-444b-88da-8d30f153aaa0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-zsqrb" [d05e3d30-e940-444b-88da-8d30f153aaa0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004090935s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.28s)

TestNetworkPlugins/group/kindnet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-472660 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-472660 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-472660 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-8t5gs" [0fe6eb66-58c9-4a52-bbc3-acb207f99943] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005273171s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/custom-flannel/Start (46.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-472660 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-472660 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (46.139782767s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (46.14s)

TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-472660 "pgrep -a kubelet"
I0110 08:50:25.800263    7183 config.go:182] Loaded profile config "calico-472660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.36s)

TestNetworkPlugins/group/calico/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-472660 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-lzxkl" [41923d25-711e-43ca-b47e-43ed43e31bb7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-lzxkl" [41923d25-711e-43ca-b47e-43ed43e31bb7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004079297s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.22s)

TestNetworkPlugins/group/enable-default-cni/Start (61.69s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-472660 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-472660 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m1.689345211s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (61.69s)

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-472660 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-472660 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.09s)

TestNetworkPlugins/group/calico/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-472660 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.09s)

TestNetworkPlugins/group/bridge/Start (67.19s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-472660 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-472660 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m7.194532814s)
--- PASS: TestNetworkPlugins/group/bridge/Start (67.19s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-472660 "pgrep -a kubelet"
I0110 08:51:06.465437    7183 config.go:182] Loaded profile config "custom-flannel-472660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-472660 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-2fnnn" [e5f5fa72-bee6-4813-9157-72a90ac07011] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-2fnnn" [e5f5fa72-bee6-4813-9157-72a90ac07011] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.003642105s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.23s)

TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-472660 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-472660 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-472660 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

TestNetworkPlugins/group/flannel/Start (47.24s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-472660 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-472660 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (47.237468223s)
--- PASS: TestNetworkPlugins/group/flannel/Start (47.24s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-472660 "pgrep -a kubelet"
I0110 08:51:36.890918    7183 config.go:182] Loaded profile config "enable-default-cni-472660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.31s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-472660 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-pfs5p" [31ff04fe-ab26-42c4-8e73-1e7d6b5d5480] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-pfs5p" [31ff04fe-ab26-42c4-8e73-1e7d6b5d5480] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003757361s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.20s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-472660 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-472660 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-472660 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

TestStartStop/group/old-k8s-version/serial/FirstStart (53.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-093083 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-093083 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (53.218042965s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (53.22s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-472660 "pgrep -a kubelet"
I0110 08:52:03.768256    7183 config.go:182] Loaded profile config "bridge-472660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

TestNetworkPlugins/group/bridge/NetCatPod (12.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-472660 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-mq8zf" [de1f550c-f943-49a6-8c42-47b10c003108] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-mq8zf" [de1f550c-f943-49a6-8c42-47b10c003108] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.00373655s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.27s)

TestStartStop/group/no-preload/serial/FirstStart (47.69s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-095312 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-095312 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (47.690826544s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (47.69s)

TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-472660 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-472660 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

TestNetworkPlugins/group/bridge/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-472660 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-hwz5n" [9181703d-8958-4a15-9e6a-29a82f50ba63] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004067992s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-472660 "pgrep -a kubelet"
I0110 08:52:29.508154    7183 config.go:182] Loaded profile config "flannel-472660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/flannel/NetCatPod (9.23s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-472660 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-5dd4ccdc4b-js8m6" [76584310-f4a3-4959-8c94-0631e4800440] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-5dd4ccdc4b-js8m6" [76584310-f4a3-4959-8c94-0631e4800440] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.006047707s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.23s)

TestStartStop/group/embed-certs/serial/FirstStart (42.67s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-072273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-072273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (42.669400431s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (42.67s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-472660 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-472660 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-472660 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.18s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.41s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-093083 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [79d1d319-c830-45c8-ae4c-0e12a1b99481] Pending
helpers_test.go:353: "busybox" [79d1d319-c830-45c8-ae4c-0e12a1b99481] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [79d1d319-c830-45c8-ae4c-0e12a1b99481] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003626087s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-093083 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.41s)

TestStartStop/group/no-preload/serial/DeployApp (8.27s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-095312 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [b48219ff-c748-4c50-bc09-518ec890a3b3] Pending
helpers_test.go:353: "busybox" [b48219ff-c748-4c50-bc09-518ec890a3b3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [b48219ff-c748-4c50-bc09-518ec890a3b3] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004414653s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-095312 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.27s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.43s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-225354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-225354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (40.434586908s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.43s)

TestStartStop/group/old-k8s-version/serial/Stop (16.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-093083 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-093083 --alsologtostderr -v=3: (16.10458272s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.10s)

TestStartStop/group/no-preload/serial/Stop (16.26s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-095312 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-095312 --alsologtostderr -v=3: (16.261389379s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.26s)

TestStartStop/group/embed-certs/serial/DeployApp (7.22s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-072273 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [87bb5117-4f07-448e-bd80-5c13abfe1ede] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [87bb5117-4f07-448e-bd80-5c13abfe1ede] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.004201467s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-072273 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.22s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-093083 -n old-k8s-version-093083
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-093083 -n old-k8s-version-093083: exit status 7 (80.829696ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-093083 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (44.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-093083 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-093083 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (44.546174567s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-093083 -n old-k8s-version-093083
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (44.88s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-095312 -n no-preload-095312
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-095312 -n no-preload-095312: exit status 7 (80.059256ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-095312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (49.85s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-095312 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-095312 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (49.511851381s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-095312 -n no-preload-095312
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.85s)

TestStartStop/group/embed-certs/serial/Stop (18.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-072273 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-072273 --alsologtostderr -v=3: (18.942336068s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.94s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-225354 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [b4493b91-1903-4206-9dce-fe0d85c95ef9] Pending
helpers_test.go:353: "busybox" [b4493b91-1903-4206-9dce-fe0d85c95ef9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [b4493b91-1903-4206-9dce-fe0d85c95ef9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004117977s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-225354 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.31s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-072273 -n embed-certs-072273
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-072273 -n embed-certs-072273: exit status 7 (97.012989ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-072273 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.27s)

TestStartStop/group/embed-certs/serial/SecondStart (43.63s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-072273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-072273 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (43.240965383s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-072273 -n embed-certs-072273
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (43.63s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (18.58s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-225354 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-225354 --alsologtostderr -v=3: (18.581842045s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.58s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-dtt5w" [8a543484-64c8-459a-9754-8b99619ce408] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003315685s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-225354 -n default-k8s-diff-port-225354
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-225354 -n default-k8s-diff-port-225354: exit status 7 (77.158354ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-225354 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (47.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-225354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-225354 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (46.9042723s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-225354 -n default-k8s-diff-port-225354
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (47.23s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-dtt5w" [8a543484-64c8-459a-9754-8b99619ce408] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003800955s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-093083 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-pjbvx" [3fc9f1c6-c4c8-491b-b64c-4b1110839007] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003443149s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-093083 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-pjbvx" [3fc9f1c6-c4c8-491b-b64c-4b1110839007] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004939796s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-095312 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-095312 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/FirstStart (26.13s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-582650 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-582650 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (26.12982298s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.13s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-8m7lj" [8ecbee7b-e01b-4fff-819b-d04e5c0def03] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004080188s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestPreload/PreloadSrc/gcs (4.18s)

=== RUN   TestPreload/PreloadSrc/gcs
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-424382 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-gcs-424382 --download-only --kubernetes-version v1.34.0-rc.1 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (3.972806901s)
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-424382" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-424382
--- PASS: TestPreload/PreloadSrc/gcs (4.18s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-8m7lj" [8ecbee7b-e01b-4fff-819b-d04e5c0def03] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003999389s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-072273 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestPreload/PreloadSrc/github (4.44s)

=== RUN   TestPreload/PreloadSrc/github
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-github-434342 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
preload_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-dl-github-434342 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=github --alsologtostderr --v=1 --driver=docker  --container-runtime=crio: (3.692741322s)
helpers_test.go:176: Cleaning up "test-preload-dl-github-434342" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-github-434342
--- PASS: TestPreload/PreloadSrc/github (4.44s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-072273 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestPreload/PreloadSrc/gcs-cached (0.56s)

=== RUN   TestPreload/PreloadSrc/gcs-cached
preload_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-dl-gcs-cached-077581 --download-only --kubernetes-version v1.34.0-rc.2 --preload-source=gcs --alsologtostderr --v=1 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "test-preload-dl-gcs-cached-077581" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-dl-gcs-cached-077581
--- PASS: TestPreload/PreloadSrc/gcs-cached (0.56s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (2.54s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-582650 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-582650 --alsologtostderr -v=3: (2.540041374s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.54s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-4pp7j" [386e2b26-8e57-4a4d-877c-c25fd95f9406] Running
E0110 08:54:58.225476    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/kindnet-472660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:54:58.230934    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/kindnet-472660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:54:58.241291    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/kindnet-472660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:54:58.261604    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/kindnet-472660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:54:58.301885    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/kindnet-472660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:54:58.382234    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/kindnet-472660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:54:58.543242    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/kindnet-472660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:54:58.864001    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/kindnet-472660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:54:59.505233    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/kindnet-472660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00351464s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-582650 -n newest-cni-582650
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-582650 -n newest-cni-582650: exit status 7 (82.640334ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-582650 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (10.05s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-582650 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0
E0110 08:55:00.786134    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/kindnet-472660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:55:02.349022    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/auto-472660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0110 08:55:03.347054    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/kindnet-472660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-582650 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0: (9.709596586s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-582650 -n newest-cni-582650
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.05s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-4pp7j" [386e2b26-8e57-4a4d-877c-c25fd95f9406] Running
E0110 08:55:08.467442    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/kindnet-472660/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004219363s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-225354 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-225354 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-582650 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

Test skip (27/332)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.35.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0/binaries (0.00s)

TestDownloadOnly/v1.35.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.68s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
E0110 08:45:34.675988    7183 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/functional-648443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
panic.go:615: 
----------------------- debugLogs start: kubenet-472660 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-472660

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-472660

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-472660

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-472660

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-472660

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-472660

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-472660

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-472660

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-472660

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-472660

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: /etc/hosts:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: /etc/resolv.conf:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-472660

>>> host: crictl pods:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: crictl containers:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> k8s: describe netcat deployment:
error: context "kubenet-472660" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-472660" does not exist

>>> k8s: netcat logs:
error: context "kubenet-472660" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-472660" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-472660" does not exist

>>> k8s: coredns logs:
error: context "kubenet-472660" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-472660" does not exist

>>> k8s: api server logs:
error: context "kubenet-472660" does not exist

>>> host: /etc/cni:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: ip a s:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: ip r s:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: iptables-save:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: iptables table nat:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-472660" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-472660" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-472660" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: kubelet daemon config:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> k8s: kubelet logs:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 10 Jan 2026 08:44:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-761816
contexts:
- context:
    cluster: stopped-upgrade-761816
    user: stopped-upgrade-761816
  name: stopped-upgrade-761816
current-context: ""
kind: Config
users:
- name: stopped-upgrade-761816
  user:
    client-certificate: /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/stopped-upgrade-761816/client.crt
    client-key: /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/stopped-upgrade-761816/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-472660

>>> host: docker daemon status:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: docker daemon config:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: docker system info:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: cri-docker daemon status:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: cri-docker daemon config:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: cri-dockerd version:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: containerd daemon status:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: containerd daemon config:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: containerd config dump:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: crio daemon status:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: crio daemon config:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: /etc/crio:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"

>>> host: crio config:
* Profile "kubenet-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-472660"
----------------------- debugLogs end: kubenet-472660 [took: 3.501917844s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-472660" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-472660
--- SKIP: TestNetworkPlugins/group/kubenet (3.68s)

TestNetworkPlugins/group/cilium (5.06s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-472660 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-472660

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-472660

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-472660

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-472660

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-472660

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-472660

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-472660

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-472660

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-472660

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-472660

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-472660

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-472660" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-472660" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-472660" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-472660" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-472660" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-472660" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-472660" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-472660" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-472660

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-472660

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-472660" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-472660" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-472660

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-472660

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-472660" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-472660" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-472660" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-472660" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-472660" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> host: kubelet daemon config:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> k8s: kubelet logs:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22427-3641/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 10 Jan 2026 08:44:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: stopped-upgrade-761816
contexts:
- context:
    cluster: stopped-upgrade-761816
    user: stopped-upgrade-761816
  name: stopped-upgrade-761816
current-context: ""
kind: Config
users:
- name: stopped-upgrade-761816
  user:
    client-certificate: /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/stopped-upgrade-761816/client.crt
    client-key: /home/jenkins/minikube-integration/22427-3641/.minikube/profiles/stopped-upgrade-761816/client.key

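Note: the kubeconfig dumped above defines only the stopped-upgrade-761816 context and has an empty current-context, which is consistent with every cilium-472660 probe in this dump failing with "context was not found" / "does not exist". The following is a minimal illustrative sketch of that same lookup using client-go; it is not part of the test suite, and the kubeconfig path is a hypothetical stand-in for the file dumped above.

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical path standing in for the kubeconfig shown above.
	cfg, err := clientcmd.LoadFromFile("/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	// The dump above contains a single context: stopped-upgrade-761816.
	for name := range cfg.Contexts {
		fmt.Println("available context:", name)
	}
	// Looking up the profile that debugLogs probed reproduces the same
	// class of error seen throughout this dump.
	if _, ok := cfg.Contexts["cilium-472660"]; !ok {
		fmt.Println(`error: context "cilium-472660" does not exist`)
	}
}
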
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-472660

>>> host: docker daemon status:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> host: docker daemon config:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> host: docker system info:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> host: cri-docker daemon status:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> host: cri-docker daemon config:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> host: cri-dockerd version:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> host: containerd daemon status:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> host: containerd daemon config:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> host: containerd config dump:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> host: crio daemon status:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> host: crio daemon config:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> host: /etc/crio:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

>>> host: crio config:
* Profile "cilium-472660" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-472660"

----------------------- debugLogs end: cilium-472660 [took: 4.866369141s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-472660" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-472660
--- SKIP: TestNetworkPlugins/group/cilium (5.06s)

x
+
TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-847921" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-847921
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)